Feb 02 10:32:27 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb 02 10:32:27 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 02 10:32:27 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 10:32:27 localhost kernel: BIOS-provided physical RAM map:
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 02 10:32:27 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 02 10:32:27 localhost kernel: NX (Execute Disable) protection: active
Feb 02 10:32:27 localhost kernel: APIC: Static calls initialized
Feb 02 10:32:27 localhost kernel: SMBIOS 2.8 present.
Feb 02 10:32:27 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 02 10:32:27 localhost kernel: Hypervisor detected: KVM
Feb 02 10:32:27 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 02 10:32:27 localhost kernel: kvm-clock: using sched offset of 5519899769 cycles
Feb 02 10:32:27 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 02 10:32:27 localhost kernel: tsc: Detected 2800.000 MHz processor
Feb 02 10:32:27 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 02 10:32:27 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 02 10:32:27 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 02 10:32:27 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 02 10:32:27 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 02 10:32:27 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 02 10:32:27 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 02 10:32:27 localhost kernel: Using GB pages for direct mapping
Feb 02 10:32:27 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb 02 10:32:27 localhost kernel: ACPI: Early table checksum verification disabled
Feb 02 10:32:27 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 02 10:32:27 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 10:32:27 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 10:32:27 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 10:32:27 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 02 10:32:27 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 10:32:27 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 10:32:27 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 02 10:32:27 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 02 10:32:27 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 02 10:32:27 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 02 10:32:27 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 02 10:32:27 localhost kernel: No NUMA configuration found
Feb 02 10:32:27 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 02 10:32:27 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 02 10:32:27 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 02 10:32:27 localhost kernel: Zone ranges:
Feb 02 10:32:27 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 02 10:32:27 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 02 10:32:27 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 02 10:32:27 localhost kernel:   Device   empty
Feb 02 10:32:27 localhost kernel: Movable zone start for each node
Feb 02 10:32:27 localhost kernel: Early memory node ranges
Feb 02 10:32:27 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 02 10:32:27 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 02 10:32:27 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 02 10:32:27 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 02 10:32:27 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 02 10:32:27 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 02 10:32:27 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 02 10:32:27 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 02 10:32:27 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 02 10:32:27 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 02 10:32:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 02 10:32:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 02 10:32:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 02 10:32:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 02 10:32:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 02 10:32:27 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 02 10:32:27 localhost kernel: TSC deadline timer available
Feb 02 10:32:27 localhost kernel: CPU topo: Max. logical packages:   8
Feb 02 10:32:27 localhost kernel: CPU topo: Max. logical dies:       8
Feb 02 10:32:27 localhost kernel: CPU topo: Max. dies per package:   1
Feb 02 10:32:27 localhost kernel: CPU topo: Max. threads per core:   1
Feb 02 10:32:27 localhost kernel: CPU topo: Num. cores per package:     1
Feb 02 10:32:27 localhost kernel: CPU topo: Num. threads per package:   1
Feb 02 10:32:27 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 02 10:32:27 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 02 10:32:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 02 10:32:27 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 02 10:32:27 localhost kernel: Booting paravirtualized kernel on KVM
Feb 02 10:32:27 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 02 10:32:27 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 02 10:32:27 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 02 10:32:27 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 02 10:32:27 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 02 10:32:27 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 02 10:32:27 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 10:32:27 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb 02 10:32:27 localhost kernel: random: crng init done
Feb 02 10:32:27 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 02 10:32:27 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 02 10:32:27 localhost kernel: Fallback order for Node 0: 0 
Feb 02 10:32:27 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 02 10:32:27 localhost kernel: Policy zone: Normal
Feb 02 10:32:27 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 02 10:32:27 localhost kernel: software IO TLB: area num 8.
Feb 02 10:32:27 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 02 10:32:27 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Feb 02 10:32:27 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 02 10:32:27 localhost kernel: Dynamic Preempt: voluntary
Feb 02 10:32:27 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 02 10:32:27 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 02 10:32:27 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 02 10:32:27 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 02 10:32:27 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 02 10:32:27 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 02 10:32:27 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 02 10:32:27 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 02 10:32:27 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 10:32:27 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 10:32:27 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 10:32:27 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 02 10:32:27 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 02 10:32:27 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 02 10:32:27 localhost kernel: Console: colour VGA+ 80x25
Feb 02 10:32:27 localhost kernel: printk: console [ttyS0] enabled
Feb 02 10:32:27 localhost kernel: ACPI: Core revision 20230331
Feb 02 10:32:27 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 02 10:32:27 localhost kernel: x2apic enabled
Feb 02 10:32:27 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 02 10:32:27 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 02 10:32:27 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 02 10:32:27 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 02 10:32:27 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 02 10:32:27 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 02 10:32:27 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 02 10:32:27 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 02 10:32:27 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 02 10:32:27 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 02 10:32:27 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 02 10:32:27 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 02 10:32:27 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 02 10:32:27 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 02 10:32:27 localhost kernel: active return thunk: retbleed_return_thunk
Feb 02 10:32:27 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 02 10:32:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 02 10:32:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 02 10:32:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 02 10:32:27 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 02 10:32:27 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 02 10:32:27 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 02 10:32:27 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 02 10:32:27 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 02 10:32:27 localhost kernel: landlock: Up and running.
Feb 02 10:32:27 localhost kernel: Yama: becoming mindful.
Feb 02 10:32:27 localhost kernel: SELinux:  Initializing.
Feb 02 10:32:27 localhost kernel: LSM support for eBPF active
Feb 02 10:32:27 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 02 10:32:27 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 02 10:32:27 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 02 10:32:27 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 02 10:32:27 localhost kernel: ... version:                0
Feb 02 10:32:27 localhost kernel: ... bit width:              48
Feb 02 10:32:27 localhost kernel: ... generic registers:      6
Feb 02 10:32:27 localhost kernel: ... value mask:             0000ffffffffffff
Feb 02 10:32:27 localhost kernel: ... max period:             00007fffffffffff
Feb 02 10:32:27 localhost kernel: ... fixed-purpose events:   0
Feb 02 10:32:27 localhost kernel: ... event mask:             000000000000003f
Feb 02 10:32:27 localhost kernel: signal: max sigframe size: 1776
Feb 02 10:32:27 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 02 10:32:27 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 02 10:32:27 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 02 10:32:27 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 02 10:32:27 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 02 10:32:27 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 02 10:32:27 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 02 10:32:27 localhost kernel: node 0 deferred pages initialised in 9ms
Feb 02 10:32:27 localhost kernel: Memory: 7763776K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Feb 02 10:32:27 localhost kernel: devtmpfs: initialized
Feb 02 10:32:27 localhost kernel: x86/mm: Memory block size: 128MB
Feb 02 10:32:27 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 02 10:32:27 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 02 10:32:27 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 02 10:32:27 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 02 10:32:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 02 10:32:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 02 10:32:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 02 10:32:27 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 02 10:32:27 localhost kernel: audit: type=2000 audit(1770028345.639:1): state=initialized audit_enabled=0 res=1
Feb 02 10:32:27 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 02 10:32:27 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 02 10:32:27 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 02 10:32:27 localhost kernel: cpuidle: using governor menu
Feb 02 10:32:27 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 02 10:32:27 localhost kernel: PCI: Using configuration type 1 for base access
Feb 02 10:32:27 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 02 10:32:27 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 02 10:32:27 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 02 10:32:27 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 02 10:32:27 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 02 10:32:27 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 02 10:32:27 localhost kernel: Demotion targets for Node 0: null
Feb 02 10:32:27 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 02 10:32:27 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 02 10:32:27 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 02 10:32:27 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 02 10:32:27 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 02 10:32:27 localhost kernel: ACPI: Interpreter enabled
Feb 02 10:32:27 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 02 10:32:27 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 02 10:32:27 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 02 10:32:27 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 02 10:32:27 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 02 10:32:27 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 02 10:32:27 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [3] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [4] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [5] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [6] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [7] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [8] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [9] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [10] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [11] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [12] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [13] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [14] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [15] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [16] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [17] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [18] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [19] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [20] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [21] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [22] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [23] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [24] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [25] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [26] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [27] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [28] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [29] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [30] registered
Feb 02 10:32:27 localhost kernel: acpiphp: Slot [31] registered
Feb 02 10:32:27 localhost kernel: PCI host bridge to bus 0000:00
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 02 10:32:27 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 02 10:32:27 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 02 10:32:27 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 02 10:32:27 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 02 10:32:27 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 02 10:32:27 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 02 10:32:27 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 02 10:32:27 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 02 10:32:27 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 02 10:32:27 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 02 10:32:27 localhost kernel: iommu: Default domain type: Translated
Feb 02 10:32:27 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 02 10:32:27 localhost kernel: SCSI subsystem initialized
Feb 02 10:32:27 localhost kernel: ACPI: bus type USB registered
Feb 02 10:32:27 localhost kernel: usbcore: registered new interface driver usbfs
Feb 02 10:32:27 localhost kernel: usbcore: registered new interface driver hub
Feb 02 10:32:27 localhost kernel: usbcore: registered new device driver usb
Feb 02 10:32:27 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 02 10:32:27 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 02 10:32:27 localhost kernel: PTP clock support registered
Feb 02 10:32:27 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 02 10:32:27 localhost kernel: NetLabel: Initializing
Feb 02 10:32:27 localhost kernel: NetLabel:  domain hash size = 128
Feb 02 10:32:27 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 02 10:32:27 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 02 10:32:27 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 02 10:32:27 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 02 10:32:27 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 02 10:32:27 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 02 10:32:27 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 02 10:32:27 localhost kernel: vgaarb: loaded
Feb 02 10:32:27 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 02 10:32:27 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 02 10:32:27 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 02 10:32:27 localhost kernel: pnp: PnP ACPI init
Feb 02 10:32:27 localhost kernel: pnp 00:03: [dma 2]
Feb 02 10:32:27 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 02 10:32:27 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 02 10:32:27 localhost kernel: NET: Registered PF_INET protocol family
Feb 02 10:32:27 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 02 10:32:27 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 02 10:32:27 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 02 10:32:27 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 02 10:32:27 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 02 10:32:27 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 02 10:32:27 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 02 10:32:27 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 02 10:32:27 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 02 10:32:27 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 02 10:32:27 localhost kernel: NET: Registered PF_XDP protocol family
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 02 10:32:27 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 02 10:32:27 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 02 10:32:27 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 02 10:32:27 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24292 usecs
Feb 02 10:32:27 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 02 10:32:27 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 02 10:32:27 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 02 10:32:27 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 02 10:32:27 localhost kernel: ACPI: bus type thunderbolt registered
Feb 02 10:32:27 localhost kernel: Initialise system trusted keyrings
Feb 02 10:32:27 localhost kernel: Key type blacklist registered
Feb 02 10:32:27 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 02 10:32:27 localhost kernel: zbud: loaded
Feb 02 10:32:27 localhost kernel: integrity: Platform Keyring initialized
Feb 02 10:32:27 localhost kernel: integrity: Machine keyring initialized
Feb 02 10:32:27 localhost kernel: Freeing initrd memory: 88000K
Feb 02 10:32:27 localhost kernel: NET: Registered PF_ALG protocol family
Feb 02 10:32:27 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 02 10:32:27 localhost kernel: Key type asymmetric registered
Feb 02 10:32:27 localhost kernel: Asymmetric key parser 'x509' registered
Feb 02 10:32:27 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 02 10:32:27 localhost kernel: io scheduler mq-deadline registered
Feb 02 10:32:27 localhost kernel: io scheduler kyber registered
Feb 02 10:32:27 localhost kernel: io scheduler bfq registered
Feb 02 10:32:27 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 02 10:32:27 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 02 10:32:27 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 02 10:32:27 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 02 10:32:27 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 02 10:32:27 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 02 10:32:27 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 02 10:32:27 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 02 10:32:27 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 02 10:32:27 localhost kernel: Non-volatile memory driver v1.3
Feb 02 10:32:27 localhost kernel: rdac: device handler registered
Feb 02 10:32:27 localhost kernel: hp_sw: device handler registered
Feb 02 10:32:27 localhost kernel: emc: device handler registered
Feb 02 10:32:27 localhost kernel: alua: device handler registered
Feb 02 10:32:27 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 02 10:32:27 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 02 10:32:27 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 02 10:32:27 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 02 10:32:27 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 02 10:32:27 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 02 10:32:27 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 02 10:32:27 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb 02 10:32:27 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 02 10:32:27 localhost kernel: hub 1-0:1.0: USB hub found
Feb 02 10:32:27 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 02 10:32:27 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 02 10:32:27 localhost kernel: usbserial: USB Serial support registered for generic
Feb 02 10:32:27 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 02 10:32:27 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 02 10:32:27 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 02 10:32:27 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 02 10:32:27 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 02 10:32:27 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 02 10:32:27 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 02 10:32:27 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-02T10:32:26 UTC (1770028346)
Feb 02 10:32:27 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 02 10:32:27 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 02 10:32:27 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 02 10:32:27 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 02 10:32:27 localhost kernel: usbcore: registered new interface driver usbhid
Feb 02 10:32:27 localhost kernel: usbhid: USB HID core driver
Feb 02 10:32:27 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 02 10:32:27 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 02 10:32:27 localhost kernel: Initializing XFRM netlink socket
Feb 02 10:32:27 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 02 10:32:27 localhost kernel: Segment Routing with IPv6
Feb 02 10:32:27 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 02 10:32:27 localhost kernel: mpls_gso: MPLS GSO support
Feb 02 10:32:27 localhost kernel: IPI shorthand broadcast: enabled
Feb 02 10:32:27 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 02 10:32:27 localhost kernel: AES CTR mode by8 optimization enabled
Feb 02 10:32:27 localhost kernel: sched_clock: Marking stable (856003140, 150411870)->(1116441270, -110026260)
Feb 02 10:32:27 localhost kernel: registered taskstats version 1
Feb 02 10:32:27 localhost kernel: Loading compiled-in X.509 certificates
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 02 10:32:27 localhost kernel: Demotion targets for Node 0: null
Feb 02 10:32:27 localhost kernel: page_owner is disabled
Feb 02 10:32:27 localhost kernel: Key type .fscrypt registered
Feb 02 10:32:27 localhost kernel: Key type fscrypt-provisioning registered
Feb 02 10:32:27 localhost kernel: Key type big_key registered
Feb 02 10:32:27 localhost kernel: Key type encrypted registered
Feb 02 10:32:27 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 02 10:32:27 localhost kernel: Loading compiled-in module X.509 certificates
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 02 10:32:27 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 02 10:32:27 localhost kernel: ima: No architecture policies found
Feb 02 10:32:27 localhost kernel: evm: Initialising EVM extended attributes:
Feb 02 10:32:27 localhost kernel: evm: security.selinux
Feb 02 10:32:27 localhost kernel: evm: security.SMACK64 (disabled)
Feb 02 10:32:27 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 02 10:32:27 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 02 10:32:27 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 02 10:32:27 localhost kernel: evm: security.apparmor (disabled)
Feb 02 10:32:27 localhost kernel: evm: security.ima
Feb 02 10:32:27 localhost kernel: evm: security.capability
Feb 02 10:32:27 localhost kernel: evm: HMAC attrs: 0x1
Feb 02 10:32:27 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 02 10:32:27 localhost kernel: Running certificate verification RSA selftest
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 02 10:32:27 localhost kernel: Running certificate verification ECDSA selftest
Feb 02 10:32:27 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 02 10:32:27 localhost kernel: clk: Disabling unused clocks
Feb 02 10:32:27 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 02 10:32:27 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb 02 10:32:27 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 02 10:32:27 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb 02 10:32:27 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 02 10:32:27 localhost kernel: Run /init as init process
Feb 02 10:32:27 localhost kernel:   with arguments:
Feb 02 10:32:27 localhost kernel:     /init
Feb 02 10:32:27 localhost kernel:   with environment:
Feb 02 10:32:27 localhost kernel:     HOME=/
Feb 02 10:32:27 localhost kernel:     TERM=linux
Feb 02 10:32:27 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Feb 02 10:32:27 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 02 10:32:27 localhost systemd[1]: Detected virtualization kvm.
Feb 02 10:32:27 localhost systemd[1]: Detected architecture x86-64.
Feb 02 10:32:27 localhost systemd[1]: Running in initrd.
Feb 02 10:32:27 localhost systemd[1]: No hostname configured, using default hostname.
Feb 02 10:32:27 localhost systemd[1]: Hostname set to <localhost>.
Feb 02 10:32:27 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 02 10:32:27 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 02 10:32:27 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 02 10:32:27 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 02 10:32:27 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 02 10:32:27 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 02 10:32:27 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 02 10:32:27 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 02 10:32:27 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 02 10:32:27 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 02 10:32:27 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 02 10:32:27 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 02 10:32:27 localhost systemd[1]: Reached target Local File Systems.
Feb 02 10:32:27 localhost systemd[1]: Reached target Path Units.
Feb 02 10:32:27 localhost systemd[1]: Reached target Slice Units.
Feb 02 10:32:27 localhost systemd[1]: Reached target Swaps.
Feb 02 10:32:27 localhost systemd[1]: Reached target Timer Units.
Feb 02 10:32:27 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 02 10:32:27 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 02 10:32:27 localhost systemd[1]: Listening on Journal Socket.
Feb 02 10:32:27 localhost systemd[1]: Listening on udev Control Socket.
Feb 02 10:32:27 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 02 10:32:27 localhost systemd[1]: Reached target Socket Units.
Feb 02 10:32:27 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 02 10:32:27 localhost systemd[1]: Starting Journal Service...
Feb 02 10:32:27 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 02 10:32:27 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 02 10:32:27 localhost systemd[1]: Starting Create System Users...
Feb 02 10:32:27 localhost systemd[1]: Starting Setup Virtual Console...
Feb 02 10:32:27 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 02 10:32:27 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 02 10:32:27 localhost systemd-journald[304]: Journal started
Feb 02 10:32:27 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/f6638e847d324f679114f32b50ad8ee5) is 8.0M, max 153.6M, 145.6M free.
Feb 02 10:32:27 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Feb 02 10:32:27 localhost systemd[1]: Started Journal Service.
Feb 02 10:32:27 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Feb 02 10:32:27 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 02 10:32:27 localhost systemd[1]: Finished Create System Users.
Feb 02 10:32:27 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 02 10:32:27 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 02 10:32:27 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 02 10:32:27 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 02 10:32:27 localhost systemd[1]: Finished Setup Virtual Console.
Feb 02 10:32:27 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 02 10:32:27 localhost systemd[1]: Starting dracut cmdline hook...
Feb 02 10:32:27 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Feb 02 10:32:27 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 10:32:27 localhost systemd[1]: Finished dracut cmdline hook.
Feb 02 10:32:27 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 02 10:32:27 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 02 10:32:27 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 02 10:32:27 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 02 10:32:27 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 02 10:32:27 localhost kernel: RPC: Registered udp transport module.
Feb 02 10:32:27 localhost kernel: RPC: Registered tcp transport module.
Feb 02 10:32:27 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 02 10:32:27 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 02 10:32:27 localhost rpc.statd[439]: Version 2.5.4 starting
Feb 02 10:32:27 localhost rpc.statd[439]: Initializing NSM state
Feb 02 10:32:27 localhost rpc.idmapd[444]: Setting log level to 0
Feb 02 10:32:27 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 02 10:32:27 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 02 10:32:27 localhost systemd-udevd[457]: Using default interface naming scheme 'rhel-9.0'.
Feb 02 10:32:27 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 02 10:32:27 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 02 10:32:27 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 02 10:32:27 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 02 10:32:27 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 02 10:32:27 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 10:32:27 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 02 10:32:27 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 10:32:27 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 10:32:27 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 02 10:32:27 localhost systemd[1]: Reached target Network.
Feb 02 10:32:27 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 02 10:32:27 localhost systemd[1]: Starting dracut initqueue hook...
Feb 02 10:32:27 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 02 10:32:27 localhost kernel: libata version 3.00 loaded.
Feb 02 10:32:27 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 02 10:32:27 localhost kernel: scsi host0: ata_piix
Feb 02 10:32:27 localhost kernel: scsi host1: ata_piix
Feb 02 10:32:27 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 02 10:32:27 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 02 10:32:27 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 02 10:32:27 localhost kernel:  vda: vda1
Feb 02 10:32:27 localhost kernel: ata1: found unknown device (class 0)
Feb 02 10:32:27 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 02 10:32:27 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 02 10:32:27 localhost systemd-udevd[493]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 10:32:27 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 02 10:32:27 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 02 10:32:27 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 02 10:32:27 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 02 10:32:27 localhost systemd[1]: Reached target Initrd Root Device.
Feb 02 10:32:28 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 02 10:32:28 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 02 10:32:28 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 02 10:32:28 localhost systemd[1]: Reached target System Initialization.
Feb 02 10:32:28 localhost systemd[1]: Reached target Basic System.
Feb 02 10:32:28 localhost systemd[1]: Finished dracut initqueue hook.
Feb 02 10:32:28 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 02 10:32:28 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 02 10:32:28 localhost systemd[1]: Reached target Remote File Systems.
Feb 02 10:32:28 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 02 10:32:28 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 02 10:32:28 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb 02 10:32:28 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Feb 02 10:32:28 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 02 10:32:28 localhost systemd[1]: Mounting /sysroot...
Feb 02 10:32:28 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 02 10:32:28 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb 02 10:32:28 localhost kernel: XFS (vda1): Ending clean mount
Feb 02 10:32:28 localhost systemd[1]: Mounted /sysroot.
Feb 02 10:32:28 localhost systemd[1]: Reached target Initrd Root File System.
Feb 02 10:32:28 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 02 10:32:28 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 02 10:32:28 localhost systemd[1]: Reached target Initrd File Systems.
Feb 02 10:32:28 localhost systemd[1]: Reached target Initrd Default Target.
Feb 02 10:32:28 localhost systemd[1]: Starting dracut mount hook...
Feb 02 10:32:28 localhost systemd[1]: Finished dracut mount hook.
Feb 02 10:32:28 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 02 10:32:28 localhost rpc.idmapd[444]: exiting on signal 15
Feb 02 10:32:28 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 02 10:32:28 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 02 10:32:28 localhost systemd[1]: Stopped target Network.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Timer Units.
Feb 02 10:32:28 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 02 10:32:28 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Basic System.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Path Units.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Remote File Systems.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Slice Units.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Socket Units.
Feb 02 10:32:28 localhost systemd[1]: Stopped target System Initialization.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Local File Systems.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Swaps.
Feb 02 10:32:28 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut mount hook.
Feb 02 10:32:28 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 02 10:32:28 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 02 10:32:28 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 02 10:32:28 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 02 10:32:28 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 02 10:32:28 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 02 10:32:28 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 02 10:32:28 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 02 10:32:28 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 02 10:32:28 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 02 10:32:28 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 02 10:32:28 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Closed udev Control Socket.
Feb 02 10:32:28 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Closed udev Kernel Socket.
Feb 02 10:32:28 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 02 10:32:28 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 02 10:32:28 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 02 10:32:29 localhost systemd[1]: Starting Cleanup udev Database...
Feb 02 10:32:29 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 02 10:32:29 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 02 10:32:29 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Stopped Create System Users.
Feb 02 10:32:29 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 02 10:32:29 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Cleanup udev Database.
Feb 02 10:32:29 localhost systemd[1]: Reached target Switch Root.
Feb 02 10:32:29 localhost systemd[1]: Starting Switch Root...
Feb 02 10:32:29 localhost systemd[1]: Switching root.
Feb 02 10:32:29 localhost systemd-journald[304]: Journal stopped
Feb 02 10:32:29 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Feb 02 10:32:29 localhost kernel: audit: type=1404 audit(1770028349.213:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability open_perms=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 10:32:29 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 10:32:29 localhost kernel: audit: type=1403 audit(1770028349.338:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 02 10:32:29 localhost systemd[1]: Successfully loaded SELinux policy in 128.894ms.
Feb 02 10:32:29 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 47.241ms.
Feb 02 10:32:29 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 02 10:32:29 localhost systemd[1]: Detected virtualization kvm.
Feb 02 10:32:29 localhost systemd[1]: Detected architecture x86-64.
Feb 02 10:32:29 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 10:32:29 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Stopped Switch Root.
Feb 02 10:32:29 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 02 10:32:29 localhost systemd[1]: Created slice Slice /system/getty.
Feb 02 10:32:29 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 02 10:32:29 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 02 10:32:29 localhost systemd[1]: Created slice User and Session Slice.
Feb 02 10:32:29 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 02 10:32:29 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 02 10:32:29 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 02 10:32:29 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 02 10:32:29 localhost systemd[1]: Stopped target Switch Root.
Feb 02 10:32:29 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 02 10:32:29 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 02 10:32:29 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 02 10:32:29 localhost systemd[1]: Reached target Path Units.
Feb 02 10:32:29 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 02 10:32:29 localhost systemd[1]: Reached target Slice Units.
Feb 02 10:32:29 localhost systemd[1]: Reached target Swaps.
Feb 02 10:32:29 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 02 10:32:29 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 02 10:32:29 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 02 10:32:29 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 02 10:32:29 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 02 10:32:29 localhost systemd[1]: Listening on udev Control Socket.
Feb 02 10:32:29 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 02 10:32:29 localhost systemd[1]: Mounting Huge Pages File System...
Feb 02 10:32:29 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 02 10:32:29 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 02 10:32:29 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 02 10:32:29 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 02 10:32:29 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 02 10:32:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 10:32:29 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 02 10:32:29 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 02 10:32:29 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 02 10:32:29 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 02 10:32:29 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 02 10:32:29 localhost systemd[1]: Stopped Journal Service.
Feb 02 10:32:29 localhost systemd[1]: Starting Journal Service...
Feb 02 10:32:29 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 02 10:32:29 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 02 10:32:29 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 10:32:29 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 02 10:32:29 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 02 10:32:29 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 02 10:32:29 localhost kernel: fuse: init (API version 7.37)
Feb 02 10:32:29 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 02 10:32:29 localhost systemd[1]: Mounted Huge Pages File System.
Feb 02 10:32:29 localhost systemd-journald[680]: Journal started
Feb 02 10:32:29 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 02 10:32:29 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 02 10:32:29 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 02 10:32:29 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 02 10:32:29 localhost systemd[1]: Started Journal Service.
Feb 02 10:32:29 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 02 10:32:29 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 02 10:32:29 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 02 10:32:29 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 02 10:32:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 10:32:29 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 02 10:32:29 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 02 10:32:29 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 02 10:32:29 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 02 10:32:29 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 02 10:32:29 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 02 10:32:29 localhost kernel: ACPI: bus type drm_connector registered
Feb 02 10:32:29 localhost systemd[1]: Mounting FUSE Control File System...
Feb 02 10:32:29 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 02 10:32:29 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 02 10:32:29 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 02 10:32:29 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 02 10:32:29 localhost systemd[1]: Starting Load/Save OS Random Seed...
Feb 02 10:32:29 localhost systemd[1]: Starting Create System Users...
Feb 02 10:32:29 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 02 10:32:29 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 02 10:32:29 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 02 10:32:29 localhost systemd-journald[680]: Received client request to flush runtime journal.
Feb 02 10:32:29 localhost systemd[1]: Mounted FUSE Control File System.
Feb 02 10:32:29 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 02 10:32:29 localhost systemd[1]: Finished Load/Save OS Random Seed.
Feb 02 10:32:29 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 02 10:32:29 localhost systemd[1]: Finished Create System Users.
Feb 02 10:32:29 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 02 10:32:30 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 02 10:32:30 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 02 10:32:30 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 02 10:32:30 localhost systemd[1]: Reached target Local File Systems.
Feb 02 10:32:30 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 02 10:32:30 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 02 10:32:30 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 02 10:32:30 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 02 10:32:30 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 02 10:32:30 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 02 10:32:30 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 02 10:32:30 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Feb 02 10:32:30 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 02 10:32:30 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 02 10:32:30 localhost systemd[1]: Starting Security Auditing Service...
Feb 02 10:32:30 localhost systemd[1]: Starting RPC Bind...
Feb 02 10:32:30 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 02 10:32:30 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 02 10:32:30 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 02 10:32:30 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 02 10:32:30 localhost systemd[1]: Started RPC Bind.
Feb 02 10:32:30 localhost augenrules[709]: /sbin/augenrules: No change
Feb 02 10:32:30 localhost augenrules[724]: No rules
Feb 02 10:32:30 localhost augenrules[724]: enabled 1
Feb 02 10:32:30 localhost augenrules[724]: failure 1
Feb 02 10:32:30 localhost augenrules[724]: pid 704
Feb 02 10:32:30 localhost augenrules[724]: rate_limit 0
Feb 02 10:32:30 localhost augenrules[724]: backlog_limit 8192
Feb 02 10:32:30 localhost augenrules[724]: lost 0
Feb 02 10:32:30 localhost augenrules[724]: backlog 3
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time 60000
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time_actual 0
Feb 02 10:32:30 localhost augenrules[724]: enabled 1
Feb 02 10:32:30 localhost augenrules[724]: failure 1
Feb 02 10:32:30 localhost augenrules[724]: pid 704
Feb 02 10:32:30 localhost augenrules[724]: rate_limit 0
Feb 02 10:32:30 localhost augenrules[724]: backlog_limit 8192
Feb 02 10:32:30 localhost augenrules[724]: lost 0
Feb 02 10:32:30 localhost augenrules[724]: backlog 0
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time 60000
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time_actual 0
Feb 02 10:32:30 localhost augenrules[724]: enabled 1
Feb 02 10:32:30 localhost augenrules[724]: failure 1
Feb 02 10:32:30 localhost augenrules[724]: pid 704
Feb 02 10:32:30 localhost augenrules[724]: rate_limit 0
Feb 02 10:32:30 localhost augenrules[724]: backlog_limit 8192
Feb 02 10:32:30 localhost augenrules[724]: lost 0
Feb 02 10:32:30 localhost augenrules[724]: backlog 4
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time 60000
Feb 02 10:32:30 localhost augenrules[724]: backlog_wait_time_actual 0
Feb 02 10:32:30 localhost systemd[1]: Started Security Auditing Service.
Feb 02 10:32:30 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 02 10:32:30 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 02 10:32:30 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 02 10:32:30 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 02 10:32:30 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 02 10:32:30 localhost systemd[1]: Starting Update is Completed...
Feb 02 10:32:30 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Feb 02 10:32:30 localhost systemd[1]: Finished Update is Completed.
Feb 02 10:32:30 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 02 10:32:30 localhost systemd[1]: Reached target System Initialization.
Feb 02 10:32:30 localhost systemd[1]: Started dnf makecache --timer.
Feb 02 10:32:30 localhost systemd[1]: Started Daily rotation of log files.
Feb 02 10:32:30 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 02 10:32:30 localhost systemd[1]: Reached target Timer Units.
Feb 02 10:32:30 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 02 10:32:30 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 02 10:32:30 localhost systemd[1]: Reached target Socket Units.
Feb 02 10:32:30 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 02 10:32:30 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 10:32:30 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 02 10:32:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 10:32:30 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 10:32:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 10:32:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 10:32:30 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 02 10:32:30 localhost systemd[1]: Reached target Basic System.
Feb 02 10:32:30 localhost dbus-broker-lau[769]: Ready
Feb 02 10:32:30 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 02 10:32:30 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 02 10:32:30 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 02 10:32:30 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 02 10:32:30 localhost systemd[1]: Starting NTP client/server...
Feb 02 10:32:30 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 02 10:32:30 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 02 10:32:30 localhost systemd[1]: Starting IPv4 firewall with iptables...
Feb 02 10:32:30 localhost systemd[1]: Started irqbalance daemon.
Feb 02 10:32:30 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 02 10:32:30 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 10:32:30 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 10:32:30 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 10:32:30 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 02 10:32:30 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 02 10:32:30 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 02 10:32:30 localhost chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 02 10:32:30 localhost chronyd[795]: Loaded 0 symmetric keys
Feb 02 10:32:30 localhost chronyd[795]: Using right/UTC timezone to obtain leap second data
Feb 02 10:32:30 localhost chronyd[795]: Loaded seccomp filter (level 2)
Feb 02 10:32:30 localhost systemd[1]: Starting User Login Management...
Feb 02 10:32:30 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 02 10:32:30 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 02 10:32:30 localhost systemd[1]: Started NTP client/server.
Feb 02 10:32:30 localhost kernel: Console: switching to colour dummy device 80x25
Feb 02 10:32:30 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 02 10:32:30 localhost kernel: [drm] features: -context_init
Feb 02 10:32:30 localhost kernel: [drm] number of scanouts: 1
Feb 02 10:32:30 localhost kernel: [drm] number of cap sets: 0
Feb 02 10:32:30 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 02 10:32:30 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 02 10:32:30 localhost systemd-logind[793]: New seat seat0.
Feb 02 10:32:30 localhost systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 02 10:32:30 localhost systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 02 10:32:30 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 02 10:32:30 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 02 10:32:30 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 02 10:32:30 localhost systemd[1]: Started User Login Management.
Feb 02 10:32:30 localhost kernel: kvm_amd: TSC scaling supported
Feb 02 10:32:30 localhost kernel: kvm_amd: Nested Virtualization enabled
Feb 02 10:32:30 localhost kernel: kvm_amd: Nested Paging enabled
Feb 02 10:32:30 localhost kernel: kvm_amd: LBR virtualization supported
Feb 02 10:32:30 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 02 10:32:30 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 02 10:32:31 localhost iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Feb 02 10:32:31 localhost systemd[1]: Finished IPv4 firewall with iptables.
Feb 02 10:32:31 localhost cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 02 Feb 2026 10:32:31 +0000. Up 5.81 seconds.
Feb 02 10:32:31 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 02 10:32:31 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 02 10:32:31 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp6s7b6p_s.mount: Deactivated successfully.
Feb 02 10:32:31 localhost systemd[1]: Starting Hostname Service...
Feb 02 10:32:31 localhost systemd[1]: Started Hostname Service.
Feb 02 10:32:31 np0005604929.novalocal systemd-hostnamed[854]: Hostname set to <np0005604929.novalocal> (static)
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Reached target Preparation for Network.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Starting Network Manager...
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1448] NetworkManager (version 1.54.3-2.el9) is starting... (boot:d0c6c7d7-6431-45ce-a025-35dcf8d61f8d)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1453] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1585] manager[0x55c8744f2000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1623] hostname: hostname: using hostnamed
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1624] hostname: static hostname changed from (none) to "np0005604929.novalocal"
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1629] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1747] manager[0x55c8744f2000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1749] manager[0x55c8744f2000]: rfkill: WWAN hardware radio set enabled
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1830] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1831] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1831] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1832] manager: Networking is enabled by state file
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1833] settings: Loaded settings plugin: keyfile (internal)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1865] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1889] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1905] dhcp: init: Using DHCP client 'internal'
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1909] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1923] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1933] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1945] device (lo): Activation: starting connection 'lo' (4fdf3fa4-43de-4352-8e3d-ab6325fd58e4)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1954] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1957] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1985] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1990] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1992] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1994] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1995] device (eth0): carrier: link connected
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.1997] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2002] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Started Network Manager.
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2020] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2024] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2025] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2026] manager: NetworkManager state is now CONNECTING
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2028] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Reached target Network.
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2033] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2035] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2179] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2181] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 10:32:32 np0005604929.novalocal NetworkManager[858]: <info>  [1770028352.2186] device (lo): Activation: successful, device activated.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Reached target NFS client services.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: Reached target Remote File Systems.
Feb 02 10:32:32 np0005604929.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8688] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8706] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8738] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8774] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8777] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8782] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8787] device (eth0): Activation: successful, device activated.
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8796] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 02 10:32:33 np0005604929.novalocal NetworkManager[858]: <info>  [1770028353.8800] manager: startup complete
Feb 02 10:32:33 np0005604929.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 02 10:32:33 np0005604929.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 02 Feb 2026 10:32:34 +0000. Up 8.46 seconds.
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.181         | 255.255.255.0 | global | fa:16:3e:34:ed:41 |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe34:ed41/64 |       .       |  link  | fa:16:3e:34:ed:41 |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Feb 02 10:32:34 np0005604929.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Feb 02 10:32:35 np0005604929.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Generating public/private rsa key pair.
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key fingerprint is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: SHA256:5bAaMy2fjaAlIrrAaFZ0jWjDL8GHQzSp2kQJ6QqpBS8 root@np0005604929.novalocal
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key's randomart image is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +---[RSA 3072]----+
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |.o.=.            |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o *.+ o          |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |.=.@ + .. .      |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |Eo* B  . =       |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |*=.o..B S .      |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |B.o..+ O +       |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |++  . . + .      |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |+.               |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |.                |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +----[SHA256]-----+
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key fingerprint is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: SHA256:Lru6ujgMZgE6uGtk4ZyUNVtaeR5FYLEfBQt/q6Cyx6s root@np0005604929.novalocal
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key's randomart image is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +---[ECDSA 256]---+
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |      .==+..     |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |.  o +.o+ o      |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |+ o * o..+ .     |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |+= o   .. o .    |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |+o+    .S. .     |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |oO    ... .      |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |B. .... ..       |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o+  oo o         |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o.oE=++.         |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +----[SHA256]-----+
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key fingerprint is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: SHA256:W7oM9XfZSgymfLSgMZE1BR4h9rxLF5yn4gYIKpsBzG0 root@np0005604929.novalocal
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: The key's randomart image is:
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +--[ED25519 256]--+
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |        o *+.    |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o .    . B + .   |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |.o E.   o + + .  |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |. .. . . . . +   |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o .   . S * *    |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: | =     . & O + o |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |o     . + B + = .|
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |       o o o o . |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: |        o     .  |
Feb 02 10:32:35 np0005604929.novalocal cloud-init[922]: +----[SHA256]-----+
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Reached target Network is Online.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting System Logging Service...
Feb 02 10:32:35 np0005604929.novalocal sm-notify[1005]: Version 2.5.4 starting
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Permit User Sessions...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Finished Permit User Sessions.
Feb 02 10:32:35 np0005604929.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Feb 02 10:32:35 np0005604929.novalocal sshd[1007]: Server listening on :: port 22.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started Command Scheduler.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started Getty on tty1.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Reached target Login Prompts.
Feb 02 10:32:35 np0005604929.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Feb 02 10:32:35 np0005604929.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 02 10:32:35 np0005604929.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 9% if used.)
Feb 02 10:32:35 np0005604929.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Feb 02 10:32:35 np0005604929.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Feb 02 10:32:35 np0005604929.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Started System Logging Service.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Reached target Multi-User System.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 02 10:32:35 np0005604929.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 10:32:35 np0005604929.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Feb 02 10:32:35 np0005604929.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb 02 10:32:35 np0005604929.novalocal cloud-init[1158]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 02 Feb 2026 10:32:35 +0000. Up 10.04 seconds.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Feb 02 10:32:35 np0005604929.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Feb 02 10:32:35 np0005604929.novalocal dracut[1266]: dracut-057-102.git20250818.el9
Feb 02 10:32:35 np0005604929.novalocal dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1345]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 02 Feb 2026 10:32:36 +0000. Up 10.44 seconds.
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1365]: #############################################################
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1366]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1371]: 256 SHA256:Lru6ujgMZgE6uGtk4ZyUNVtaeR5FYLEfBQt/q6Cyx6s root@np0005604929.novalocal (ECDSA)
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1373]: 256 SHA256:W7oM9XfZSgymfLSgMZE1BR4h9rxLF5yn4gYIKpsBzG0 root@np0005604929.novalocal (ED25519)
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1378]: 3072 SHA256:5bAaMy2fjaAlIrrAaFZ0jWjDL8GHQzSp2kQJ6QqpBS8 root@np0005604929.novalocal (RSA)
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1379]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1380]: #############################################################
Feb 02 10:32:36 np0005604929.novalocal cloud-init[1345]: Cloud-init v. 24.4-8.el9 finished at Mon, 02 Feb 2026 10:32:36 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.58 seconds
Feb 02 10:32:36 np0005604929.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Feb 02 10:32:36 np0005604929.novalocal systemd[1]: Reached target Cloud-init target.
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: Module 'resume' will not be installed, because it's in the list to be omitted!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: memstrack is not available
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1731]: Unable to negotiate with 38.102.83.114 port 47056: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1742]: Connection reset by 38.102.83.114 port 47066 [preauth]
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1792]: Unable to negotiate with 38.102.83.114 port 47080: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1803]: Unable to negotiate with 38.102.83.114 port 47092: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1712]: Connection closed by 38.102.83.114 port 36926 [preauth]
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1813]: Connection reset by 38.102.83.114 port 47108 [preauth]
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1830]: Connection reset by 38.102.83.114 port 47122 [preauth]
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: memstrack is not available
Feb 02 10:32:36 np0005604929.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1844]: Unable to negotiate with 38.102.83.114 port 47138: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 02 10:32:36 np0005604929.novalocal sshd-session[1852]: Unable to negotiate with 38.102.83.114 port 47146: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: systemd ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: fips ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: systemd-initrd ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: i18n ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: drm ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: prefixdevname ***
Feb 02 10:32:37 np0005604929.novalocal dracut[1268]: *** Including module: kernel-modules ***
Feb 02 10:32:37 np0005604929.novalocal kernel: block vda: the capability attribute has been deprecated.
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: kernel-modules-extra ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: qemu ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: fstab-sys ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: rootfs-block ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: terminfo ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: udev-rules ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: Skipping udev rule: 91-permissions.rules
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: virtiofs ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: dracut-systemd ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: usrmount ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: base ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: fs-lib ***
Feb 02 10:32:38 np0005604929.novalocal chronyd[795]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Feb 02 10:32:38 np0005604929.novalocal chronyd[795]: System clock TAI offset set to 37 seconds
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: kdumpbase ***
Feb 02 10:32:38 np0005604929.novalocal dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:   microcode_ctl module: mangling fw_dir
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Including module: openssl ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Including module: shutdown ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Including module: squash ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Including modules done ***
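
At this point dracut has settled the module list for the kdump initramfs, having skipped iscsi, nvmf and memstrack earlier because their binaries are missing. To see every module the installed dracut knows about on a given host, dracut ships a --list-modules flag; a small illustrative wrapper:

    import subprocess

    # `dracut --list-modules` prints one module name per line.
    out = subprocess.run(["dracut", "--list-modules"],
                         capture_output=True, text=True, check=True)
    modules = sorted(out.stdout.split())
    print(f"{len(modules)} dracut modules available")
    print("\n".join(modules))
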
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Installing kernel module dependencies ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Installing kernel module dependencies done ***
Feb 02 10:32:39 np0005604929.novalocal dracut[1268]: *** Resolving executable dependencies ***
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 35 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 35 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 33 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 33 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 26 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 34 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 34 affinity is now unmanaged
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 02 10:32:40 np0005604929.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
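
The irqbalance EPERM burst is expected inside a KVM guest: affinity for some virtio and hypervisor-injected interrupts cannot be changed from the guest side, so irqbalance marks each one unmanaged and moves on; this is noise rather than a fault (irqbalance can also be started with one --banirq=N per IRQ to silence it up front). A sketch that pulls the affected IRQ numbers out of the same hypothetical journal.txt excerpt:

    import re

    UNMANAGED = re.compile(r"IRQ (\d+) affinity is now unmanaged")

    irqs = set()
    with open("journal.txt") as fh:      # same hypothetical excerpt as above
        for line in fh:
            m = UNMANAGED.search(line)
            if m:
                irqs.add(int(m.group(1)))

    # For this log: [25, 26, 28, 31, 32, 33, 34, 35]
    print("unmanaged IRQs:", sorted(irqs))
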
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: *** Resolving executable dependencies done ***
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: *** Generating early-microcode cpio image ***
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: *** Store current command line parameters ***
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: Stored kernel commandline:
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Feb 02 10:32:41 np0005604929.novalocal dracut[1268]: *** Install squash loader ***
Feb 02 10:32:42 np0005604929.novalocal dracut[1268]: *** Squashing the files inside the initramfs ***
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: *** Squashing the files inside the initramfs done ***
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: *** Hardlinking files ***
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Mode:           real
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Files:          50
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Linked:         0 files
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Compared:       0 xattrs
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Compared:       0 files
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Saved:          0 B
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: Duration:       0.000202 seconds
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: *** Hardlinking files done ***
Feb 02 10:32:43 np0005604929.novalocal dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
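
The image path encodes the running kernel release plus a literal kdump suffix, which is how kdumpctl locates the crash initramfs and decides when a rebuild is needed. A self-contained existence check for the current kernel:

    import os
    from pathlib import Path

    release = os.uname().release        # e.g. "5.14.0-665.el9.x86_64"
    img = Path(f"/boot/initramfs-{release}kdump.img")
    if img.exists():
        print(f"{img}: {img.stat().st_size / 2**20:.1f} MiB")
    else:
        print(f"{img}: missing (kdump would need to rebuild it)")
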
Feb 02 10:32:43 np0005604929.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 10:32:44 np0005604929.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Feb 02 10:32:44 np0005604929.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Feb 02 10:32:44 np0005604929.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 02 10:32:44 np0005604929.novalocal systemd[1]: Startup finished in 1.151s (kernel) + 2.377s (initrd) + 14.810s (userspace) = 18.339s.
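
systemd's summary line splits boot walltime into kernel, initrd and userspace phases; here userspace dominates (cloud-init, the kdump rebuild and friends all run there). The phases sum to the reported total up to per-phase rounding, as a quick parse shows:

    import re

    line = ("Startup finished in 1.151s (kernel) + 2.377s (initrd) "
            "+ 14.810s (userspace) = 18.339s.")

    phases = {name: float(sec)
              for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))
    print(phases)       # {'kernel': 1.151, 'initrd': 2.377, 'userspace': 14.81}
    # sum is 18.338 vs 18.339 reported: each phase is rounded to the millisecond.
    print("sum:", round(sum(phases.values()), 3), "reported:", total)
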
Feb 02 10:33:02 np0005604929.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 10:33:06 np0005604929.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 33978 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 02 10:33:06 np0005604929.novalocal systemd-logind[793]: New session 1 of user zuul.
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Queued start job for default target Main User Target.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Created slice User Application Slice.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Reached target Paths.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Reached target Timers.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Reached target Sockets.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Reached target Basic System.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Reached target Main User Target.
Feb 02 10:33:06 np0005604929.novalocal systemd[4309]: Startup finished in 147ms.
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 02 10:33:06 np0005604929.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 02 10:33:06 np0005604929.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:33:07 np0005604929.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 10:33:09 np0005604929.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 10:33:17 np0005604929.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 10:33:18 np0005604929.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 02 10:33:20 np0005604929.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDXGgrKz1YSPmor1u95csajTqFj+hZXJTxzvYqc0jGN7G97KRyXxKwXraYNpqCkIyks3qSCLDAicJ1S+MK/rJvI0KhxJ8aMx8f17FOuO8C6+3FAa/FUu4TH9ESEVM9Po8Xe7lyuf7EVuMW3NLv+WrduuDuyLt7T26ASSP6KAyLxkYarj7e9PXbcdJW3aobSIT8Pbrk+zgzIBoWRbNUJi0ZVCrFLbyvdAWuFyJsKduYERuRkgkeoAiiewYjsDt8GU3fnn4walyxYAz6Ye9gMOM1adIRUcPvPU2Wcfgf2cS6d5KZt0hRMIrANWAcsKXge3LFkWykKfQLtqGudbXFaX6VcGEk3XkVdqxfAEITD2U82Gq0xnBwarnKZ7AvjTJGpiMvatngY2494CQ7mEjOja9IwJQ0QGU1eRal49jZ0hqRCRgXV6/l1zrG+j09ey1zeklziU51fuGZ4fu8NK19zpewSynC9W7NHQkOKCUpK6i3J4D8TwKJc334lc6iDNifsI0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:21 np0005604929.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:21 np0005604929.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:22 np0005604929.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770028401.6212404-251-121896901142519/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=d2fadb474937489e8299d300506df27a_id_rsa follow=False checksum=3981dd1f325e87d355da38379edf51c1c5594579 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:22 np0005604929.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:23 np0005604929.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770028402.5420837-306-81028473096409/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=d2fadb474937489e8299d300506df27a_id_rsa.pub follow=False checksum=4b2c2816d150dfa724bd8ae4d8811c8cf391fd6e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:24 np0005604929.novalocal python3[4979]: ansible-ping Invoked with data=pong
Feb 02 10:33:25 np0005604929.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 10:33:28 np0005604929.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 02 10:33:29 np0005604929.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:29 np0005604929.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:29 np0005604929.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:30 np0005604929.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:30 np0005604929.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:30 np0005604929.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
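
A reading note on these ansible-file entries: Ansible logs mode as a decimal integer, which is easy to misread. mode=493 above is octal 0755, the earlier mode=448/384/420 are 0700/0600/0644, and the mode=511 and mode=288 values further down are 0777 and 0440. Python's oct() confirms the mapping:

    for mode in (493, 448, 420, 384, 511, 288):
        print(mode, "->", oct(mode))
    # 493 -> 0o755, 448 -> 0o700, 420 -> 0o644,
    # 384 -> 0o600, 511 -> 0o777, 288 -> 0o440
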
Feb 02 10:33:32 np0005604929.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tynfnkotwgdlmsolduzvxhfptmaspjhb ; /usr/bin/python3'
Feb 02 10:33:32 np0005604929.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:32 np0005604929.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:32 np0005604929.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:33 np0005604929.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuotfykscndzocipwkmxkkstzohznbdc ; /usr/bin/python3'
Feb 02 10:33:33 np0005604929.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:33 np0005604929.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:33 np0005604929.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:33 np0005604929.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmfmernbncnlazquxjdezfbqdrqajcde ; /usr/bin/python3'
Feb 02 10:33:33 np0005604929.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:33 np0005604929.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1770028412.9278114-31-243536765948392/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:33 np0005604929.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:34 np0005604929.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:34 np0005604929.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:34 np0005604929.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:35 np0005604929.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:35 np0005604929.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:35 np0005604929.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:35 np0005604929.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:36 np0005604929.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:36 np0005604929.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:36 np0005604929.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:36 np0005604929.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:37 np0005604929.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:37 np0005604929.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:37 np0005604929.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:37 np0005604929.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:38 np0005604929.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:38 np0005604929.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:38 np0005604929.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:39 np0005604929.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:39 np0005604929.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:39 np0005604929.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:39 np0005604929.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:40 np0005604929.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:40 np0005604929.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:40 np0005604929.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:33:40 np0005604929.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
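
Each ansible-authorized_key task above appends one public key to /home/zuul/.ssh/authorized_keys. The SHA256:... fingerprint sshd printed for the accepted login earlier is simply the base64-encoded SHA-256 digest of the decoded key blob with the trailing = padding stripped, which is easy to reproduce in pure Python for any key in the list (here the ssh-ed25519 key for raukadah@gmail.com):

    import base64
    import hashlib

    # One authorized_keys line: "<type> <base64-blob> [comment]"
    pubkey = ("ssh-ed25519 "
              "AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp "
              "raukadah@gmail.com")

    blob = base64.b64decode(pubkey.split()[1])
    fp = base64.b64encode(hashlib.sha256(blob).digest()).rstrip(b"=").decode()
    print("SHA256:" + fp)               # same format ssh-keygen -lf prints
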
Feb 02 10:33:44 np0005604929.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhgojnvqroviddrihbwmjmxrkkkqjiod ; /usr/bin/python3'
Feb 02 10:33:44 np0005604929.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:44 np0005604929.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 10:33:44 np0005604929.novalocal systemd[1]: Starting Time & Date Service...
Feb 02 10:33:44 np0005604929.novalocal systemd[1]: Started Time & Date Service.
Feb 02 10:33:44 np0005604929.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Feb 02 10:33:44 np0005604929.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:45 np0005604929.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckdfqcjsehgloheztuklosfzgnbzpii ; /usr/bin/python3'
Feb 02 10:33:45 np0005604929.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:45 np0005604929.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:45 np0005604929.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:45 np0005604929.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:46 np0005604929.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1770028425.6829226-251-205010887597971/source _original_basename=tmp19diav48 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:46 np0005604929.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:47 np0005604929.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770028426.6902049-301-63044996067162/source _original_basename=tmpfomty2hi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:47 np0005604929.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvmbtfubcuwlykyhlhhayhznczcxyiok ; /usr/bin/python3'
Feb 02 10:33:47 np0005604929.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:48 np0005604929.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:48 np0005604929.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:48 np0005604929.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wheylfwkubdfclqtwitvrroalorzmxah ; /usr/bin/python3'
Feb 02 10:33:48 np0005604929.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:48 np0005604929.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770028427.8388624-381-113334362947231/source _original_basename=tmpd4dt12lf follow=False checksum=342f501e01c1098669fc1f1874ec75e7ad7dd27a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:48 np0005604929.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:49 np0005604929.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:33:49 np0005604929.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:33:49 np0005604929.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zllokzvdrxhswmubwhcmtjzbszeunyhf ; /usr/bin/python3'
Feb 02 10:33:49 np0005604929.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:49 np0005604929.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:33:49 np0005604929.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:49 np0005604929.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etobtaniaauydjrwyumknbhmcliizczt ; /usr/bin/python3'
Feb 02 10:33:49 np0005604929.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:50 np0005604929.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1770028429.548459-451-206195808849843/source _original_basename=tmpu8ibpegc follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:33:50 np0005604929.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Feb 02 10:33:50 np0005604929.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkiqtdkurtypeynmtdjcfryebmjddooq ; /usr/bin/python3'
Feb 02 10:33:50 np0005604929.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:33:50 np0005604929.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-5c28-15a2-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:33:50 np0005604929.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
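
Installing a fragment under /etc/sudoers.d and then running visudo -c is the usual validate-after-write pattern, since a single malformed sudoers line can break sudo for everyone. The check can also be pointed at one file; a sketch using the path from the log (run as root, the fragment is mode 0440):

    import subprocess

    # `visudo -c -f FILE` parses FILE in check-only mode and exits non-zero
    # on a syntax error, without touching the live configuration.
    res = subprocess.run(
        ["visudo", "-c", "-f", "/etc/sudoers.d/zuul-sudo-grep"],
        capture_output=True, text=True)
    print(res.stdout or res.stderr, end="")
    print("parsed OK" if res.returncode == 0 else "syntax error")
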
Feb 02 10:33:51 np0005604929.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-5c28-15a2-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 02 10:33:52 np0005604929.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:34:09 np0005604929.novalocal sshd-session[6923]: Invalid user solana from 80.94.92.186 port 43308
Feb 02 10:34:09 np0005604929.novalocal sshd-session[6923]: Connection closed by invalid user solana 80.94.92.186 port 43308 [preauth]
Feb 02 10:34:10 np0005604929.novalocal sudo[6948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iasenjstuhfxjzngaloyxmsuaiawxrei ; /usr/bin/python3'
Feb 02 10:34:10 np0005604929.novalocal sudo[6948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:34:10 np0005604929.novalocal python3[6950]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:34:10 np0005604929.novalocal sudo[6948]: pam_unix(sudo:session): session closed for user root
Feb 02 10:34:14 np0005604929.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 02 10:34:47 np0005604929.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 02 10:34:47 np0005604929.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9490] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 10:34:47 np0005604929.novalocal systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9692] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9725] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9729] device (eth1): carrier: link connected
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9732] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9739] policy: auto-activating connection 'Wired connection 1' (bc408819-069b-306f-bf8d-84b09cc827a7)
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9745] device (eth1): Activation: starting connection 'Wired connection 1' (bc408819-069b-306f-bf8d-84b09cc827a7)
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9746] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9751] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9759] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 10:34:47 np0005604929.novalocal NetworkManager[858]: <info>  [1770028487.9769] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:34:48 np0005604929.novalocal python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b64e-0b60-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
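
ip -j link returns the same data as plain ip link but as JSON, which is presumably why the playbook uses it instead of screen-scraping the human-readable output. Decoding it takes a few lines:

    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "link"],
                         capture_output=True, text=True, check=True)
    for link in json.loads(out.stdout):
        print(f"{link['ifname']:8} state={link.get('operstate', '?'):8} "
              f"mac={link.get('address', '-')}")
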
Feb 02 10:34:58 np0005604929.novalocal sudo[7058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfxwvsiiosehtxcmtopfphnpenchttdi ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 10:34:58 np0005604929.novalocal sudo[7058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:34:58 np0005604929.novalocal python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:34:58 np0005604929.novalocal sudo[7058]: pam_unix(sudo:session): session closed for user root
Feb 02 10:34:59 np0005604929.novalocal sudo[7131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybtjbsrlopvsupwclrydbmkwynxeggbp ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 10:34:59 np0005604929.novalocal sudo[7131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:34:59 np0005604929.novalocal python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770028498.7315118-104-162191450602497/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5ee1ddd5cf6ce54e8efe7f602da60627567e509c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:34:59 np0005604929.novalocal sudo[7131]: pam_unix(sudo:session): session closed for user root
Feb 02 10:34:59 np0005604929.novalocal sudo[7181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phtqtyujbekhljminwhmzmfrtpfeeaka ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 10:34:59 np0005604929.novalocal sudo[7181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:35:00 np0005604929.novalocal python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Stopping Network Manager...
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0536] caught SIGTERM, shutting down normally.
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0548] dhcp4 (eth0): canceled DHCP transaction
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0549] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0549] dhcp4 (eth0): state changed no lease
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0551] manager: NetworkManager state is now CONNECTING
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0628] dhcp4 (eth1): canceled DHCP transaction
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0628] dhcp4 (eth1): state changed no lease
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[858]: <info>  [1770028500.0674] exiting (success)
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Stopped Network Manager.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: NetworkManager.service: Consumed 1.255s CPU time, 10.3M memory peak.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Starting Network Manager...
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1098] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d0c6c7d7-6431-45ce-a025-35dcf8d61f8d)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1101] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1152] manager[0x55c592758000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Starting Hostname Service...
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Started Hostname Service.
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1916] hostname: hostname: using hostnamed
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1916] hostname: static hostname changed from (none) to "np0005604929.novalocal"
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1928] manager[0x55c592758000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1928] manager[0x55c592758000]: rfkill: WWAN hardware radio set enabled
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1957] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1957] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1958] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1959] manager: Networking is enabled by state file
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1961] settings: Loaded settings plugin: keyfile (internal)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1965] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.1994] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2003] dhcp: init: Using DHCP client 'internal'
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2006] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2012] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2018] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2028] device (lo): Activation: starting connection 'lo' (4fdf3fa4-43de-4352-8e3d-ab6325fd58e4)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2034] device (eth0): carrier: link connected
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2039] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2045] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2045] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2052] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2059] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2064] device (eth1): carrier: link connected
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2067] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2072] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (bc408819-069b-306f-bf8d-84b09cc827a7) (indicated)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2072] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2077] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2084] device (eth1): Activation: starting connection 'Wired connection 1' (bc408819-069b-306f-bf8d-84b09cc827a7)
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Started Network Manager.
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2090] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2095] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2098] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2100] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2103] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2107] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2110] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2112] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2115] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2125] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2129] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2138] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2142] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2167] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2169] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.2176] device (lo): Activation: successful, device activated.
Feb 02 10:35:00 np0005604929.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 02 10:35:00 np0005604929.novalocal sudo[7181]: pam_unix(sudo:session): session closed for user root
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3838] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3846] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3911] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3941] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3943] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3949] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3952] device (eth0): Activation: successful, device activated.
Feb 02 10:35:00 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028500.3958] manager: NetworkManager state is now CONNECTED_GLOBAL
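The NetworkManager block above is the standard early-boot adoption path: each device is assumed as an external or matching connection, walked through unmanaged -> unavailable -> disconnected -> activating, and eth0's new DHCP lease makes it the IPv4 default for routing and DNS. A read-only sketch for inspecting the same state by hand, using only stock nmcli:

    nmcli device status                              # lo/eth0/eth1 and their current states
    nmcli -f NAME,UUID,DEVICE,STATE connection show  # which profile owns which device
    nmcli device show eth0 | grep -E 'IP4.ADDRESS|IP4.GATEWAY'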
Feb 02 10:35:00 np0005604929.novalocal python3[7248]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b64e-0b60-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:35:10 np0005604929.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 10:35:23 np0005604929.novalocal sshd-session[7270]: Invalid user solana from 80.94.92.168 port 36190
Feb 02 10:35:23 np0005604929.novalocal sshd-session[7270]: Connection closed by invalid user solana 80.94.92.168 port 36190 [preauth]
Feb 02 10:35:30 np0005604929.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.6739] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 10:35:45 np0005604929.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 10:35:45 np0005604929.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7013] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7016] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7023] device (eth1): Activation: successful, device activated.
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7029] manager: startup complete
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7030] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <warn>  [1770028545.7035] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7045] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7188] dhcp4 (eth1): canceled DHCP transaction
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7189] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7190] dhcp4 (eth1): state changed no lease
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7204] policy: auto-activating connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16)
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7208] device (eth1): Activation: starting connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16)
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7209] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7211] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7218] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7225] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7271] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7273] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 10:35:45 np0005604929.novalocal NetworkManager[7191]: <info>  [1770028545.7278] device (eth1): Activation: successful, device activated.
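Note that the first eth1 activation succeeds only transiently: once startup completes, the assumed 'Wired connection 1' is failed with ip-config-unavailable after its 45-second DHCP window, and autoconnect falls back to the 'ci-private-network' profile, which is what held Network Manager Wait Online until 10:35:45. If the fallback profile is meant to own eth1 from the start, pinning it with static addressing would avoid the DHCP timeout entirely. A minimal sketch; the 192.0.2.10/24 address is a placeholder, not taken from this host:

    # hypothetical static profile; substitute the real ci-private-network addressing
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.0.2.10/24 connection.autoconnect yes
    nmcli connection up ci-private-network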
Feb 02 10:35:52 np0005604929.novalocal systemd[4309]: Starting Mark boot as successful...
Feb 02 10:35:52 np0005604929.novalocal systemd[4309]: Finished Mark boot as successful.
Feb 02 10:35:55 np0005604929.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 10:35:57 np0005604929.novalocal sshd-session[7298]: Received disconnect from 203.83.238.251 port 44278:11:  [preauth]
Feb 02 10:35:57 np0005604929.novalocal sshd-session[7298]: Disconnected from authenticating user root 203.83.238.251 port 44278 [preauth]
Feb 02 10:36:00 np0005604929.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 33978:11: disconnected by user
Feb 02 10:36:00 np0005604929.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 33978
Feb 02 10:36:00 np0005604929.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:36:00 np0005604929.novalocal systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Feb 02 10:37:08 np0005604929.novalocal sshd-session[7300]: Accepted publickey for zuul from 38.102.83.114 port 57966 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 10:37:08 np0005604929.novalocal systemd-logind[793]: New session 3 of user zuul.
Feb 02 10:37:08 np0005604929.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 02 10:37:08 np0005604929.novalocal sshd-session[7300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:37:08 np0005604929.novalocal sudo[7379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciahqixqsdecktrwpahgkiepmzirpnod ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 10:37:08 np0005604929.novalocal sudo[7379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:37:08 np0005604929.novalocal python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:37:08 np0005604929.novalocal sudo[7379]: pam_unix(sudo:session): session closed for user root
Feb 02 10:37:08 np0005604929.novalocal sudo[7452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtzxctwzdsvbmopjxccccurmnhwlwsi ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 10:37:08 np0005604929.novalocal sudo[7452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:37:08 np0005604929.novalocal python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770028628.166094-373-198839989854361/source _original_basename=tmp9falmmk3 follow=False checksum=f74a077a45d90d4103dda4843bc3227230c4511d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:37:08 np0005604929.novalocal sudo[7452]: pam_unix(sudo:session): session closed for user root
Feb 02 10:37:12 np0005604929.novalocal sshd-session[7303]: Connection closed by 38.102.83.114 port 57966
Feb 02 10:37:12 np0005604929.novalocal sshd-session[7300]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:37:12 np0005604929.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 02 10:37:12 np0005604929.novalocal systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Feb 02 10:37:12 np0005604929.novalocal systemd-logind[793]: Removed session 3.
Feb 02 10:38:39 np0005604929.novalocal sshd-session[7481]: Invalid user solana from 80.94.92.186 port 46454
Feb 02 10:38:39 np0005604929.novalocal sshd-session[7481]: Connection closed by invalid user solana 80.94.92.186 port 46454 [preauth]
Feb 02 10:38:43 np0005604929.novalocal sshd-session[7483]: Received disconnect from 195.178.110.15 port 45610:11:  [preauth]
Feb 02 10:38:43 np0005604929.novalocal sshd-session[7483]: Disconnected from authenticating user root 195.178.110.15 port 45610 [preauth]
Feb 02 10:38:52 np0005604929.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Feb 02 10:38:52 np0005604929.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Feb 02 10:38:52 np0005604929.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Feb 02 10:40:47 np0005604929.novalocal sshd-session[7488]: Invalid user AdminGPON from 45.148.10.121 port 44058
Feb 02 10:40:47 np0005604929.novalocal sshd-session[7488]: Connection closed by invalid user AdminGPON 45.148.10.121 port 44058 [preauth]
Feb 02 10:42:49 np0005604929.novalocal sshd-session[7490]: Invalid user admin from 43.252.231.122 port 55404
Feb 02 10:42:49 np0005604929.novalocal sshd-session[7490]: Connection closed by invalid user admin 43.252.231.122 port 55404 [preauth]
Feb 02 10:43:03 np0005604929.novalocal sshd-session[7492]: Invalid user solana from 80.94.92.186 port 49612
Feb 02 10:43:03 np0005604929.novalocal sshd-session[7492]: Connection closed by invalid user solana 80.94.92.186 port 49612 [preauth]
Feb 02 10:44:22 np0005604929.novalocal sshd-session[7494]: Received disconnect from 45.148.10.151 port 20838:11:  [preauth]
Feb 02 10:44:22 np0005604929.novalocal sshd-session[7494]: Disconnected from authenticating user root 45.148.10.151 port 20838 [preauth]
Feb 02 10:45:29 np0005604929.novalocal sshd-session[7498]: Accepted publickey for zuul from 38.102.83.114 port 51118 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 10:45:29 np0005604929.novalocal systemd-logind[793]: New session 4 of user zuul.
Feb 02 10:45:29 np0005604929.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 02 10:45:29 np0005604929.novalocal sshd-session[7498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:45:29 np0005604929.novalocal sudo[7525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohyeofpsqupxemygmomajonxfbevsmnk ; /usr/bin/python3'
Feb 02 10:45:29 np0005604929.novalocal sudo[7525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:29 np0005604929.novalocal python3[7527]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-74f7-1c31-000000002181-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:29 np0005604929.novalocal sudo[7525]: pam_unix(sudo:session): session closed for user root
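The wrapped command above resolves the root disk's major:minor device number, which the io.max throttling writes later in this session are keyed on. On this guest the root disk is virtio-blk:

    lsblk -nd -o MAJ:MIN /dev/vda    # prints the device number; 252:0 on this host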
Feb 02 10:45:29 np0005604929.novalocal sudo[7553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzfuyxspfenjogitzxwddvlpoiysdoqs ; /usr/bin/python3'
Feb 02 10:45:29 np0005604929.novalocal sudo[7553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:29 np0005604929.novalocal python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:29 np0005604929.novalocal sudo[7553]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:29 np0005604929.novalocal sudo[7580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sefxllybaulxpqlzoulgpuczxtdalmuq ; /usr/bin/python3'
Feb 02 10:45:29 np0005604929.novalocal sudo[7580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:30 np0005604929.novalocal python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:30 np0005604929.novalocal sudo[7580]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:30 np0005604929.novalocal sudo[7606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkkuqbrssicqseotzhfcbutownkkpavp ; /usr/bin/python3'
Feb 02 10:45:30 np0005604929.novalocal sudo[7606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:30 np0005604929.novalocal python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:30 np0005604929.novalocal sudo[7606]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:30 np0005604929.novalocal sudo[7632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulmngoyeceejpvyrycyhgmxmfjndano ; /usr/bin/python3'
Feb 02 10:45:30 np0005604929.novalocal sudo[7632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:30 np0005604929.novalocal python3[7634]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:30 np0005604929.novalocal sudo[7632]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:31 np0005604929.novalocal sudo[7658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhyqymrpnkqxraumtmkybmgfiqwebsn ; /usr/bin/python3'
Feb 02 10:45:31 np0005604929.novalocal sudo[7658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:31 np0005604929.novalocal python3[7660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:31 np0005604929.novalocal sudo[7658]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:31 np0005604929.novalocal sudo[7736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzcdylekntqcgawlmrneesjxrceqezhf ; /usr/bin/python3'
Feb 02 10:45:31 np0005604929.novalocal sudo[7736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:31 np0005604929.novalocal python3[7738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:45:31 np0005604929.novalocal sudo[7736]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:32 np0005604929.novalocal sudo[7809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnriukehjbaffvcsrmfitytjklhmavrw ; /usr/bin/python3'
Feb 02 10:45:32 np0005604929.novalocal sudo[7809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:32 np0005604929.novalocal python3[7811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770029131.7227807-547-56995084109152/source _original_basename=tmptqnounpe follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:45:32 np0005604929.novalocal sudo[7809]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:33 np0005604929.novalocal sudo[7859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmdqafwpxmgctkyfjrctjuktujmtnus ; /usr/bin/python3'
Feb 02 10:45:33 np0005604929.novalocal sudo[7859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:33 np0005604929.novalocal python3[7861]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 10:45:33 np0005604929.novalocal systemd[1]: Reloading.
Feb 02 10:45:33 np0005604929.novalocal systemd-rc-local-generator[7881]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 10:45:33 np0005604929.novalocal sudo[7859]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:35 np0005604929.novalocal sudo[7916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oihmmuklaxfhovqdmzcwqrywxgaclvgh ; /usr/bin/python3'
Feb 02 10:45:35 np0005604929.novalocal sudo[7916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:35 np0005604929.novalocal python3[7918]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 02 10:45:35 np0005604929.novalocal sudo[7916]: pam_unix(sudo:session): session closed for user root
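The wait_for task above polls up to 30 seconds for /sys/fs/cgroup/system.slice/io.max to appear after the daemon-reload. The override.conf written a few tasks earlier is masked in the log (content=NOT_LOGGING_PARAMETER); on cgroup v2 the io.max file only exists once the io controller is enabled in the parent's subtree, so a plausible manual equivalent, offered as an assumption rather than the actual override, is:

    cat /sys/fs/cgroup/cgroup.controllers               # confirm "io" is available at the root
    echo "+io" > /sys/fs/cgroup/cgroup.subtree_control  # exposes io.max in the child slices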
Feb 02 10:45:35 np0005604929.novalocal sudo[7942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqynqvlfqremiurumsvbfjxwkbqrlsqj ; /usr/bin/python3'
Feb 02 10:45:35 np0005604929.novalocal sudo[7942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:35 np0005604929.novalocal python3[7944]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:35 np0005604929.novalocal sudo[7942]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:35 np0005604929.novalocal sudo[7970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysfgqjinawnxekdlxbteflcilfejbwvl ; /usr/bin/python3'
Feb 02 10:45:35 np0005604929.novalocal sudo[7970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:35 np0005604929.novalocal python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:35 np0005604929.novalocal sudo[7970]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:36 np0005604929.novalocal sudo[7998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldtexbsbbnjwuklydccfahtyzyjamfx ; /usr/bin/python3'
Feb 02 10:45:36 np0005604929.novalocal sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:36 np0005604929.novalocal python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:36 np0005604929.novalocal sudo[7998]: pam_unix(sudo:session): session closed for user root
Feb 02 10:45:36 np0005604929.novalocal sudo[8026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhemmaekuejrvolvxiovvuoncubtdrou ; /usr/bin/python3'
Feb 02 10:45:36 np0005604929.novalocal sudo[8026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:36 np0005604929.novalocal python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:36 np0005604929.novalocal sudo[8026]: pam_unix(sudo:session): session closed for user root
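The four preceding shell tasks write the same limit line into each top-level cgroup. Consolidated, with the values decoded: 262144000 bytes/s is exactly 250 MiB/s, and 252:0 is /dev/vda from the earlier lsblk call:

    limit="252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$limit" > "/sys/fs/cgroup/$cg/io.max"
        cat "/sys/fs/cgroup/$cg/io.max"    # same verification the follow-up task performs
    done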
Feb 02 10:45:37 np0005604929.novalocal python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-74f7-1c31-000000002188-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:45:37 np0005604929.novalocal python3[8085]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 10:45:40 np0005604929.novalocal sshd-session[7501]: Connection closed by 38.102.83.114 port 51118
Feb 02 10:45:40 np0005604929.novalocal sshd-session[7498]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:45:40 np0005604929.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 02 10:45:40 np0005604929.novalocal systemd[1]: session-4.scope: Consumed 3.541s CPU time.
Feb 02 10:45:40 np0005604929.novalocal systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Feb 02 10:45:40 np0005604929.novalocal systemd-logind[793]: Removed session 4.
Feb 02 10:45:42 np0005604929.novalocal sshd-session[8089]: Accepted publickey for zuul from 38.102.83.114 port 33376 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 10:45:42 np0005604929.novalocal systemd-logind[793]: New session 5 of user zuul.
Feb 02 10:45:42 np0005604929.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 02 10:45:42 np0005604929.novalocal sshd-session[8089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:45:42 np0005604929.novalocal sudo[8116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztyzquzwoqanguxbafjzyzowrlpjojci ; /usr/bin/python3'
Feb 02 10:45:42 np0005604929.novalocal sudo[8116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:45:42 np0005604929.novalocal python3[8118]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 10:45:50 np0005604929.novalocal setsebool[8161]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 02 10:45:50 np0005604929.novalocal setsebool[8161]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  Converting 385 SID table entries...
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 10:46:02 np0005604929.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  Converting 388 SID table entries...
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 10:46:12 np0005604929.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 10:46:32 np0005604929.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 02 10:46:32 np0005604929.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 10:46:32 np0005604929.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 02 10:46:32 np0005604929.novalocal systemd[1]: Reloading.
Feb 02 10:46:32 np0005604929.novalocal systemd-rc-local-generator[8923]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 10:46:32 np0005604929.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 10:46:36 np0005604929.novalocal sudo[8116]: pam_unix(sudo:session): session closed for user root
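The long-running become session above is the dnf task installing podman and buildah; the setsebool changes and the two SELinux SID-table conversions in between are, most likely, emitted by the container-selinux package scriptlets pulled in as dependencies, not by the playbook itself. The one-line shell equivalent:

    dnf -y install podman buildah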
Feb 02 10:46:38 np0005604929.novalocal python3[11933]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-d58b-5f8b-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:46:39 np0005604929.novalocal kernel: evm: overlay not supported
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Feb 02 10:46:39 np0005604929.novalocal dbus-broker-launch[12671]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 02 10:46:39 np0005604929.novalocal dbus-broker-launch[12671]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: Started D-Bus User Message Bus.
Feb 02 10:46:39 np0005604929.novalocal dbus-broker-launch[12671]: Ready
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: Created slice Slice /user.
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: podman-12536.scope: unit configures an IP firewall, but not running as root.
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: Started podman-12536.scope.
Feb 02 10:46:39 np0005604929.novalocal systemd[4309]: Started podman-pause-a7a4ac28.scope.
Feb 02 10:46:40 np0005604929.novalocal sudo[13281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaludoiwinqexzyxtxaowaxchzlziaqm ; /usr/bin/python3'
Feb 02 10:46:40 np0005604929.novalocal sudo[13281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:46:40 np0005604929.novalocal python3[13300]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.226:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.226:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:46:40 np0005604929.novalocal python3[13300]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 02 10:46:40 np0005604929.novalocal sudo[13281]: pam_unix(sudo:session): session closed for user root
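The blockinfile arguments above fully determine what lands in /etc/containers/registries.conf. Reproduced as a here-doc with the same BEGIN/END markers the module writes:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.129.56.226:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF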
Feb 02 10:46:40 np0005604929.novalocal sshd-session[8092]: Connection closed by 38.102.83.114 port 33376
Feb 02 10:46:40 np0005604929.novalocal sshd-session[8089]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:46:40 np0005604929.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Feb 02 10:46:40 np0005604929.novalocal systemd[1]: session-5.scope: Consumed 41.849s CPU time.
Feb 02 10:46:40 np0005604929.novalocal systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Feb 02 10:46:40 np0005604929.novalocal systemd-logind[793]: Removed session 5.
Feb 02 10:46:58 np0005604929.novalocal sshd-session[23551]: Connection closed by 38.102.83.234 port 51860 [preauth]
Feb 02 10:46:58 np0005604929.novalocal sshd-session[23554]: Connection closed by 38.102.83.234 port 51864 [preauth]
Feb 02 10:46:58 np0005604929.novalocal sshd-session[23556]: Unable to negotiate with 38.102.83.234 port 51874: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 02 10:46:58 np0005604929.novalocal sshd-session[23552]: Unable to negotiate with 38.102.83.234 port 51888: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 02 10:46:58 np0005604929.novalocal sshd-session[23558]: Unable to negotiate with 38.102.83.234 port 51870: no matching host key type found. Their offer: ssh-ed25519 [preauth]
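The burst of [preauth] lines above is a single client (38.102.83.234, the same address Zuul later connects from) probing which host key types this sshd offers; the "no matching host key type found" refusals indicate the server has no ed25519 or sk-* host keys, presumably only RSA and ECDSA. The same probe can be run from any host:

    ssh-keyscan -t rsa,ecdsa,ed25519 38.102.83.181 2>/dev/null   # prints one line per host key the server holds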
Feb 02 10:47:03 np0005604929.novalocal sshd-session[26353]: Accepted publickey for zuul from 38.102.83.114 port 55858 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 10:47:03 np0005604929.novalocal systemd-logind[793]: New session 6 of user zuul.
Feb 02 10:47:03 np0005604929.novalocal systemd[1]: Started Session 6 of User zuul.
Feb 02 10:47:03 np0005604929.novalocal sshd-session[26353]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:47:03 np0005604929.novalocal python3[26496]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLX82pzsSIVDa0CjImPeC1wB1hQ7U2V/yr7L+o40iy8yTYZN5/KKkOWZ5fAYC94BVkiawq7nPT9NwmT5CkFcboE= zuul@np0005604928.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:47:03 np0005604929.novalocal sudo[26720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqvmmfjuahvriqlrveznpnutkzfsabe ; /usr/bin/python3'
Feb 02 10:47:03 np0005604929.novalocal sudo[26720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:03 np0005604929.novalocal python3[26731]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLX82pzsSIVDa0CjImPeC1wB1hQ7U2V/yr7L+o40iy8yTYZN5/KKkOWZ5fAYC94BVkiawq7nPT9NwmT5CkFcboE= zuul@np0005604928.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:47:03 np0005604929.novalocal sudo[26720]: pam_unix(sudo:session): session closed for user root
Feb 02 10:47:04 np0005604929.novalocal sudo[27196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijqbvxmpsetrpjbgxzbipmmnkbguexxp ; /usr/bin/python3'
Feb 02 10:47:04 np0005604929.novalocal sudo[27196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:04 np0005604929.novalocal python3[27206]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604929.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 02 10:47:04 np0005604929.novalocal useradd[27352]: new group: name=cloud-admin, GID=1002
Feb 02 10:47:04 np0005604929.novalocal useradd[27352]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Feb 02 10:47:04 np0005604929.novalocal sudo[27196]: pam_unix(sudo:session): session closed for user root
Feb 02 10:47:04 np0005604929.novalocal sudo[27541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwvrjlehrsksqizgmteitpswhldsmtn ; /usr/bin/python3'
Feb 02 10:47:04 np0005604929.novalocal sudo[27541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:05 np0005604929.novalocal python3[27550]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLX82pzsSIVDa0CjImPeC1wB1hQ7U2V/yr7L+o40iy8yTYZN5/KKkOWZ5fAYC94BVkiawq7nPT9NwmT5CkFcboE= zuul@np0005604928.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 10:47:05 np0005604929.novalocal sudo[27541]: pam_unix(sudo:session): session closed for user root
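The three authorized_key tasks in this session install the same controller ECDSA public key for zuul, root, and cloud-admin. A manual sketch of the equivalent, with the key abbreviated here rather than repeated in full:

    for u in zuul root cloud-admin; do
        home=$(getent passwd "$u" | cut -d: -f6)
        install -d -m 0700 -o "$u" -g "$u" "$home/.ssh"
        # full ecdsa-sha2-nistp256 key as logged above, abbreviated for brevity
        echo 'ecdsa-sha2-nistp256 AAAA...boE= zuul@np0005604928.novalocal' >> "$home/.ssh/authorized_keys"
        chown "$u:$u" "$home/.ssh/authorized_keys" && chmod 0600 "$home/.ssh/authorized_keys"
    done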
Feb 02 10:47:05 np0005604929.novalocal sudo[27895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbebfznqpiofmuqdyhtloqcxijppsum ; /usr/bin/python3'
Feb 02 10:47:05 np0005604929.novalocal sudo[27895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:05 np0005604929.novalocal python3[27904]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:47:05 np0005604929.novalocal sudo[27895]: pam_unix(sudo:session): session closed for user root
Feb 02 10:47:05 np0005604929.novalocal sudo[28229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcgmogtezoljtpqdhehrmlsttvcptrm ; /usr/bin/python3'
Feb 02 10:47:05 np0005604929.novalocal sudo[28229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:05 np0005604929.novalocal python3[28239]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770029225.2470074-167-126593936749574/source _original_basename=tmpbr777lhk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:47:05 np0005604929.novalocal sudo[28229]: pam_unix(sudo:session): session closed for user root
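The copy above installs /etc/sudoers.d/cloud-admin with mode 0640, but its content is masked (content=NOT_LOGGING_PARAMETER). A typical drop-in for a CI admin account would look like the following; this is an assumption about the content, not the actual file:

    cat > /etc/sudoers.d/cloud-admin <<'EOF'
    # assumed content; the real file is not logged
    cloud-admin ALL=(ALL) NOPASSWD:ALL
    EOF
    chmod 0640 /etc/sudoers.d/cloud-admin    # matches the mode in the copy task
    visudo -cf /etc/sudoers.d/cloud-admin    # syntax-check the drop-in before relying on it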
Feb 02 10:47:06 np0005604929.novalocal sudo[28724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlbrkynxxtgbzvvkvkobsucxycxzqeon ; /usr/bin/python3'
Feb 02 10:47:06 np0005604929.novalocal sudo[28724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:47:06 np0005604929.novalocal python3[28734]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 02 10:47:06 np0005604929.novalocal systemd[1]: Starting Hostname Service...
Feb 02 10:47:06 np0005604929.novalocal systemd[1]: Started Hostname Service.
Feb 02 10:47:06 np0005604929.novalocal systemd-hostnamed[28952]: Changed pretty hostname to 'compute-0'
Feb 02 10:47:06 compute-0 systemd-hostnamed[28952]: Hostname set to <compute-0> (static)
Feb 02 10:47:06 compute-0 NetworkManager[7191]: <info>  [1770029226.9886] hostname: static hostname changed from "np0005604929.novalocal" to "compute-0"
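The hostname module with use=systemd drives systemd-hostnamed over D-Bus, which is why the hostnamed unit starts on demand above and NetworkManager immediately observes the change (note the host field of every subsequent log line flips to compute-0). The shell equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status    # confirm the static hostname took effect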
Feb 02 10:47:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 10:47:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 10:47:07 compute-0 sudo[28724]: pam_unix(sudo:session): session closed for user root
Feb 02 10:47:07 compute-0 sshd-session[26427]: Connection closed by 38.102.83.114 port 55858
Feb 02 10:47:07 compute-0 sshd-session[26353]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:47:07 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Feb 02 10:47:07 compute-0 systemd[1]: session-6.scope: Consumed 1.988s CPU time.
Feb 02 10:47:07 compute-0 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Feb 02 10:47:07 compute-0 systemd-logind[793]: Removed session 6.
Feb 02 10:47:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 10:47:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 10:47:09 compute-0 systemd[1]: man-db-cache-update.service: Consumed 36.244s CPU time.
Feb 02 10:47:09 compute-0 systemd[1]: run-r5f4c2674535440d7964b41de979cdd9c.service: Deactivated successfully.
Feb 02 10:47:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 10:47:36 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 02 10:47:36 compute-0 sshd-session[30003]: Invalid user solr from 80.94.92.186 port 52762
Feb 02 10:47:36 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 02 10:47:36 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 02 10:47:36 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 02 10:47:36 compute-0 sshd-session[30003]: Connection closed by invalid user solr 80.94.92.186 port 52762 [preauth]
Feb 02 10:47:37 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 10:47:49 compute-0 sshd-session[30011]: Connection reset by 147.185.132.27 port 58318 [preauth]
Feb 02 10:50:24 compute-0 sshd-session[30017]: Received disconnect from 91.224.92.190 port 50504:11:  [preauth]
Feb 02 10:50:24 compute-0 sshd-session[30017]: Disconnected from authenticating user root 91.224.92.190 port 50504 [preauth]
Feb 02 10:50:31 compute-0 sshd-session[30020]: Accepted publickey for zuul from 38.102.83.234 port 49504 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 10:50:31 compute-0 systemd-logind[793]: New session 7 of user zuul.
Feb 02 10:50:31 compute-0 systemd[1]: Started Session 7 of User zuul.
Feb 02 10:50:31 compute-0 sshd-session[30020]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 10:50:32 compute-0 python3[30096]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 10:50:34 compute-0 sudo[30210]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkfmthatnysbfjyobmrtfbvslbagydaf ; /usr/bin/python3'
Feb 02 10:50:34 compute-0 sudo[30210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:34 compute-0 python3[30212]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:34 compute-0 sudo[30210]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:34 compute-0 sudo[30283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjfncdjscryyrhgexinygevzqdrpatd ; /usr/bin/python3'
Feb 02 10:50:34 compute-0 sudo[30283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:34 compute-0 python3[30285]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:34 compute-0 sudo[30283]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:34 compute-0 sudo[30309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwypyfgquoipweuvbjxwlvmjukwewkvx ; /usr/bin/python3'
Feb 02 10:50:34 compute-0 sudo[30309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:34 compute-0 python3[30311]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:34 compute-0 sudo[30309]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:34 compute-0 sudo[30382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxtjncstcfxhfojvuawrfkaqbngvzsa ; /usr/bin/python3'
Feb 02 10:50:34 compute-0 sudo[30382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:35 compute-0 python3[30384]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:35 compute-0 sudo[30382]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:35 compute-0 sudo[30408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tabofnrpjehltqxuqgfaodgqncnaiwck ; /usr/bin/python3'
Feb 02 10:50:35 compute-0 sudo[30408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:35 compute-0 python3[30410]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:35 compute-0 sudo[30408]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:35 compute-0 sudo[30481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahfftzaygpqnvfawgywlaazxzukvoll ; /usr/bin/python3'
Feb 02 10:50:35 compute-0 sudo[30481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:35 compute-0 python3[30483]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:35 compute-0 sudo[30481]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:35 compute-0 sudo[30507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhqzxmaspuslbjxuexgvnfjywbklxny ; /usr/bin/python3'
Feb 02 10:50:35 compute-0 sudo[30507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:35 compute-0 python3[30509]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:35 compute-0 sudo[30507]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:35 compute-0 sudo[30580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdttxbtpavcweqmxrvqfncfsbhetcugi ; /usr/bin/python3'
Feb 02 10:50:35 compute-0 sudo[30580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:36 compute-0 python3[30582]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:36 compute-0 sudo[30580]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:36 compute-0 sudo[30606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuahggfkfjeaboliiummwnxnfmjbeire ; /usr/bin/python3'
Feb 02 10:50:36 compute-0 sudo[30606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:36 compute-0 python3[30608]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:36 compute-0 sudo[30606]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:36 compute-0 sudo[30679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvuqwkupxotynvzekpfcoawcbjhhrcf ; /usr/bin/python3'
Feb 02 10:50:36 compute-0 sudo[30679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:36 compute-0 python3[30681]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:36 compute-0 sudo[30679]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:36 compute-0 sudo[30705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygoejvrgxefcabsfuxonjaeyumjgpjqs ; /usr/bin/python3'
Feb 02 10:50:36 compute-0 sudo[30705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:36 compute-0 python3[30707]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:36 compute-0 sudo[30705]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:36 compute-0 sudo[30778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpoognztyziupiapacgyfsgnvhefvhkl ; /usr/bin/python3'
Feb 02 10:50:36 compute-0 sudo[30778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:36 compute-0 python3[30780]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:36 compute-0 sudo[30778]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:37 compute-0 sudo[30804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imevpaxalwpljymscybjeeheusyejsbx ; /usr/bin/python3'
Feb 02 10:50:37 compute-0 sudo[30804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:37 compute-0 python3[30806]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 10:50:37 compute-0 sudo[30804]: pam_unix(sudo:session): session closed for user root
Feb 02 10:50:37 compute-0 sudo[30877]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prandhlfcmwlbgsvueekzhjyvfhvlgut ; /usr/bin/python3'
Feb 02 10:50:37 compute-0 sudo[30877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 10:50:37 compute-0 python3[30879]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770029434.0306046-34034-178394197113299/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 10:50:37 compute-0 sudo[30877]: pam_unix(sudo:session): session closed for user root
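With the delorean and repo-setup-* files now in place under /etc/yum.repos.d, a quick sanity check that dnf parses and can reach the new repositories:

    dnf -q repolist    # lists the enabled repo ids just installed
    dnf makecache      # fetches metadata, failing loudly on a bad baseurl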
Feb 02 10:50:39 compute-0 sshd-session[30904]: Connection closed by 192.168.122.11 port 54036 [preauth]
Feb 02 10:50:39 compute-0 sshd-session[30905]: Connection closed by 192.168.122.11 port 54042 [preauth]
Feb 02 10:50:39 compute-0 sshd-session[30906]: Unable to negotiate with 192.168.122.11 port 54058: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 02 10:50:39 compute-0 sshd-session[30908]: Unable to negotiate with 192.168.122.11 port 54090: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 02 10:50:39 compute-0 sshd-session[30907]: Unable to negotiate with 192.168.122.11 port 54074: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
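[annotation] These three negotiation failures mean the client at 192.168.122.11 insisted on ed25519-family host keys (plain ssh-ed25519, then the two FIDO sk- variants) and this host offered none of them. A quick way to see what the server actually serves, and one plausible remediation assuming the ed25519 host key is simply absent (standard OpenSSH commands, run as root on compute-0):
    ssh-keyscan localhost 2>/dev/null | awk '{print $2}'            # list host key types currently offered
    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''    # generate the missing key
    systemctl restart sshd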
Feb 02 10:50:48 compute-0 python3[30937]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 10:52:15 compute-0 sshd-session[30939]: Invalid user ubuntu from 80.94.92.186 port 55944
Feb 02 10:52:15 compute-0 sshd-session[30939]: Connection closed by invalid user ubuntu 80.94.92.186 port 55944 [preauth]
Feb 02 10:55:47 compute-0 sshd-session[30023]: Received disconnect from 38.102.83.234 port 49504:11: disconnected by user
Feb 02 10:55:47 compute-0 sshd-session[30023]: Disconnected from user zuul 38.102.83.234 port 49504
Feb 02 10:55:47 compute-0 sshd-session[30020]: pam_unix(sshd:session): session closed for user zuul
Feb 02 10:55:47 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Feb 02 10:55:47 compute-0 systemd[1]: session-7.scope: Consumed 3.765s CPU time.
Feb 02 10:55:47 compute-0 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Feb 02 10:55:47 compute-0 systemd-logind[793]: Removed session 7.
Feb 02 10:56:29 compute-0 sshd-session[30944]: Received disconnect from 45.148.10.157 port 45515:11:  [preauth]
Feb 02 10:56:29 compute-0 sshd-session[30944]: Disconnected from authenticating user root 45.148.10.157 port 45515 [preauth]
Feb 02 10:56:50 compute-0 sshd-session[30946]: Invalid user ubuntu from 80.94.92.186 port 59114
Feb 02 10:56:50 compute-0 sshd-session[30946]: Connection closed by invalid user ubuntu 80.94.92.186 port 59114 [preauth]
Feb 02 10:58:12 compute-0 sshd-session[30948]: Connection closed by 111.61.229.78 port 42954
Feb 02 11:01:01 compute-0 CROND[30950]: (root) CMD (run-parts /etc/cron.hourly)
Feb 02 11:01:01 compute-0 run-parts[30953]: (/etc/cron.hourly) starting 0anacron
Feb 02 11:01:01 compute-0 anacron[30961]: Anacron started on 2026-02-02
Feb 02 11:01:01 compute-0 anacron[30961]: Will run job `cron.daily' in 8 min.
Feb 02 11:01:01 compute-0 anacron[30961]: Will run job `cron.weekly' in 28 min.
Feb 02 11:01:01 compute-0 anacron[30961]: Will run job `cron.monthly' in 48 min.
Feb 02 11:01:01 compute-0 anacron[30961]: Jobs will be executed sequentially
Feb 02 11:01:01 compute-0 run-parts[30963]: (/etc/cron.hourly) finished 0anacron
Feb 02 11:01:01 compute-0 CROND[30949]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 02 11:01:20 compute-0 sshd-session[30964]: Invalid user ubuntu from 80.94.92.186 port 34032
Feb 02 11:01:20 compute-0 sshd-session[30964]: Connection closed by invalid user ubuntu 80.94.92.186 port 34032 [preauth]
Feb 02 11:01:40 compute-0 sshd-session[30966]: Accepted publickey for zuul from 192.168.122.30 port 35352 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:01:40 compute-0 systemd-logind[793]: New session 8 of user zuul.
Feb 02 11:01:40 compute-0 systemd[1]: Started Session 8 of User zuul.
Feb 02 11:01:40 compute-0 sshd-session[30966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:01:41 compute-0 python3.9[31119]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:01:42 compute-0 sudo[31298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqgzodcafyjwhamhrwbmccmpyzdtkesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030102.2081704-51-121308277633573/AnsiballZ_command.py'
Feb 02 11:01:42 compute-0 sudo[31298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:01:42 compute-0 python3.9[31300]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:01:49 compute-0 sudo[31298]: pam_unix(sudo:session): session closed for user root
Feb 02 11:01:50 compute-0 sshd-session[30969]: Connection closed by 192.168.122.30 port 35352
Feb 02 11:01:50 compute-0 sshd-session[30966]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:01:50 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Feb 02 11:01:50 compute-0 systemd[1]: session-8.scope: Consumed 7.669s CPU time.
Feb 02 11:01:50 compute-0 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Feb 02 11:01:50 compute-0 systemd-logind[793]: Removed session 8.
Feb 02 11:02:05 compute-0 sshd-session[31357]: Accepted publickey for zuul from 192.168.122.30 port 51888 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:02:05 compute-0 systemd-logind[793]: New session 9 of user zuul.
Feb 02 11:02:05 compute-0 systemd[1]: Started Session 9 of User zuul.
Feb 02 11:02:05 compute-0 sshd-session[31357]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:02:06 compute-0 python3.9[31510]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 02 11:02:07 compute-0 python3.9[31684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:02:08 compute-0 sudo[31834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithtgwryaetyutkzmqfgozsshtjkompg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030127.8881862-88-155245050636577/AnsiballZ_command.py'
Feb 02 11:02:08 compute-0 sudo[31834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:08 compute-0 python3.9[31836]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:02:08 compute-0 sudo[31834]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:09 compute-0 sudo[31987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jterhgluwluefzzcwboewuzdgvypboks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030128.8984919-124-218453719878692/AnsiballZ_stat.py'
Feb 02 11:02:09 compute-0 sudo[31987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:09 compute-0 python3.9[31989]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:02:09 compute-0 sudo[31987]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:10 compute-0 sudo[32139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihxgzntdevoovllvwuxpdwogycrtjdsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030129.6723855-148-259840215529521/AnsiballZ_file.py'
Feb 02 11:02:10 compute-0 sudo[32139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:10 compute-0 python3.9[32141]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:02:10 compute-0 sudo[32139]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:10 compute-0 sudo[32291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjebruigxlzceatzrphfmkxrrapwekhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030130.4046981-172-189387186004649/AnsiballZ_stat.py'
Feb 02 11:02:10 compute-0 sudo[32291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:10 compute-0 python3.9[32293]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:02:10 compute-0 sudo[32291]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:11 compute-0 sudo[32414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhjaqcaslvvnexhinyoabnvsotqcaxmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030130.4046981-172-189387186004649/AnsiballZ_copy.py'
Feb 02 11:02:11 compute-0 sudo[32414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:11 compute-0 python3.9[32416]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030130.4046981-172-189387186004649/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:02:11 compute-0 sudo[32414]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:11 compute-0 sudo[32566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fudotfscrmhpggslkmnmwwvkhwunyouw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030131.63452-217-131851977545238/AnsiballZ_setup.py'
Feb 02 11:02:11 compute-0 sudo[32566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:12 compute-0 python3.9[32568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:02:12 compute-0 sudo[32566]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:12 compute-0 sudo[32723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yroiejscaitjsbswgxrnbpurbeiuehdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030132.5951738-241-60842149495225/AnsiballZ_file.py'
Feb 02 11:02:12 compute-0 sudo[32723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:13 compute-0 python3.9[32725]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:02:13 compute-0 sudo[32723]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:13 compute-0 sudo[32875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniznlbpxvwvakdvzgmfadpszamxjbiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030133.2247612-268-56118822867146/AnsiballZ_file.py'
Feb 02 11:02:13 compute-0 sudo[32875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:13 compute-0 python3.9[32877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:02:13 compute-0 sudo[32875]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:14 compute-0 python3.9[33027]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:02:18 compute-0 python3.9[33280]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:02:19 compute-0 python3.9[33430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:02:20 compute-0 python3.9[33584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:02:21 compute-0 sudo[33740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nczqelqfxzbipryfciwlzrcltznfhhqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030140.8644996-412-179138173396318/AnsiballZ_setup.py'
Feb 02 11:02:21 compute-0 sudo[33740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:21 compute-0 python3.9[33742]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:02:21 compute-0 sudo[33740]: pam_unix(sudo:session): session closed for user root
Feb 02 11:02:22 compute-0 sudo[33824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qggpdxwdwabuqskucijpvuocfdnvxyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030140.8644996-412-179138173396318/AnsiballZ_dnf.py'
Feb 02 11:02:22 compute-0 sudo[33824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:02:22 compute-0 python3.9[33826]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
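[annotation] The package set requested here, expressed as the equivalent dnf command (state=present with default weak-deps handling):
    dnf install -y driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos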
Feb 02 11:02:42 compute-0 sshd-session[33941]: Received disconnect from 91.224.92.54 port 31576:11:  [preauth]
Feb 02 11:02:42 compute-0 sshd-session[33941]: Disconnected from authenticating user root 91.224.92.54 port 31576 [preauth]
Feb 02 11:03:09 compute-0 systemd[1]: Reloading.
Feb 02 11:03:09 compute-0 systemd-rc-local-generator[34026]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:03:10 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 02 11:03:10 compute-0 systemd[1]: Reloading.
Feb 02 11:03:10 compute-0 systemd-rc-local-generator[34064]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:03:10 compute-0 systemd[1]: Starting dnf makecache...
Feb 02 11:03:10 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 02 11:03:10 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 02 11:03:10 compute-0 systemd[1]: Reloading.
Feb 02 11:03:10 compute-0 systemd-rc-local-generator[34104]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:03:10 compute-0 dnf[34076]: Failed determining last makecache time.
Feb 02 11:03:10 compute-0 dnf[34076]: delorean-openstack-barbican-42b4c41831408a8e323 144 kB/s | 3.0 kB     00:00
Feb 02 11:03:10 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 02 11:03:10 compute-0 dnf[34076]: delorean-python-glean-642fffe0203a8ffcc2443db52 161 kB/s | 3.0 kB     00:00
Feb 02 11:03:10 compute-0 dnf[34076]: delorean-openstack-cinder-1c00d6490d88e436f26ef 176 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-python-stevedore-c4acc5639fd2329372142 181 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-python-cloudkitty-tests-tempest-783703 162 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-diskimage-builder-61b717cc45660834fe9a 151 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-nova-eaa65f0b85123a4ee343246 177 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-python-designate-tests-tempest-347fdbc 168 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-glance-1fd12c29b339f30fe823e 180 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 184 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-manila-d783d10e75495b73866db 160 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-neutron-95cadbd379667c8520c8 170 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:03:11 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-octavia-5975097dd4b021385178 165 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-watcher-c014f81a8647287f6dcc 163 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-python-tcib-78032d201b02cee27e8e644c61 165 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 153 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-swift-dc98a8463506ac520c469a 185 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-python-tempestconf-8515371b7cceebd4282 167 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: delorean-openstack-heat-ui-013accbfd179753bc3f0 165 kB/s | 3.0 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: CentOS Stream 9 - BaseOS                         64 kB/s | 6.7 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: CentOS Stream 9 - AppStream                      53 kB/s | 6.8 kB     00:00
Feb 02 11:03:11 compute-0 dnf[34076]: CentOS Stream 9 - CRB                            64 kB/s | 6.6 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: CentOS Stream 9 - Extras packages                31 kB/s | 7.3 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: dlrn-antelope-testing                           165 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: dlrn-antelope-build-deps                        184 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: centos9-rabbitmq                                 98 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: centos9-storage                                 126 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: centos9-opstools                                128 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: NFV SIG OpenvSwitch                             125 kB/s | 3.0 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: repo-setup-centos-appstream                     158 kB/s | 4.4 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: repo-setup-centos-baseos                        176 kB/s | 3.9 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: repo-setup-centos-highavailability              153 kB/s | 3.9 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: repo-setup-centos-powertools                    170 kB/s | 4.3 kB     00:00
Feb 02 11:03:12 compute-0 dnf[34076]: Extra Packages for Enterprise Linux 9 - x86_64  101 kB/s |  30 kB     00:00
Feb 02 11:03:13 compute-0 dnf[34076]: Metadata cache created.
Feb 02 11:03:13 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 02 11:03:13 compute-0 systemd[1]: Finished dnf makecache.
Feb 02 11:03:13 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.881s CPU time.
Feb 02 11:04:16 compute-0 kernel: SELinux:  Converting 2727 SID table entries...
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:04:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:04:16 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 02 11:04:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:04:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:04:16 compute-0 systemd[1]: Reloading.
Feb 02 11:04:17 compute-0 systemd-rc-local-generator[34461]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:04:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:04:17 compute-0 sudo[33824]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:04:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:04:17 compute-0 systemd[1]: run-re184c944f8f64d61bc9c3c19af6168ef.service: Deactivated successfully.
Feb 02 11:04:17 compute-0 sudo[35375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aswxemhkzpdeftyyhawagoxmuiyalren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030257.6651704-448-148939258665586/AnsiballZ_command.py'
Feb 02 11:04:17 compute-0 sudo[35375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:18 compute-0 python3.9[35377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:04:18 compute-0 sudo[35375]: pam_unix(sudo:session): session closed for user root
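[annotation] rpm -V prints one line per file that deviates from the package database and nothing at all when a package verifies clean, so an empty result from the command above means the freshly installed set is intact. A minimal manual spot check:
    rpm -V lvm2 && echo 'lvm2 verifies clean'
    # non-empty output uses flag columns: S=size M=mode 5=digest D=device L=link U=user G=group T=mtime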
Feb 02 11:04:19 compute-0 sudo[35656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauvnrhhaplccunbfqspyajrdafouwcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030259.2365248-472-97721008985869/AnsiballZ_selinux.py'
Feb 02 11:04:19 compute-0 sudo[35656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:20 compute-0 python3.9[35658]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 02 11:04:20 compute-0 sudo[35656]: pam_unix(sudo:session): session closed for user root
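[annotation] The selinux task pins the host to the targeted policy in enforcing mode. The hand-run equivalent (persist the setting in /etc/selinux/config, then flip the running kernel):
    sed -i -e 's/^SELINUX=.*/SELINUX=enforcing/' \
           -e 's/^SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config
    setenforce 1
    getenforce    # should print: Enforcing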
Feb 02 11:04:20 compute-0 sudo[35808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnjpkkhiytoonhuxgoacyhicpuapcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030260.440539-505-214455755366177/AnsiballZ_command.py'
Feb 02 11:04:20 compute-0 sudo[35808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:20 compute-0 python3.9[35810]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 02 11:04:22 compute-0 sudo[35808]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:22 compute-0 sudo[35961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitpflqovzgqgqoprgubueozjsnzglcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030262.3471265-529-242781960539314/AnsiballZ_file.py'
Feb 02 11:04:22 compute-0 sudo[35961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:23 compute-0 python3.9[35963]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:04:23 compute-0 sudo[35961]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:25 compute-0 sudo[36113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzydnynojnzperthtqxjgsstfceuxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030264.8072593-553-265460225599301/AnsiballZ_mount.py'
Feb 02 11:04:25 compute-0 sudo[36113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:25 compute-0 python3.9[36115]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 02 11:04:25 compute-0 sudo[36113]: pam_unix(sudo:session): session closed for user root
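[annotation] With state=present, ansible.posix.mount only records the entry in /etc/fstab; the swap is not activated here (mkswap/swapon follow at 11:05:08). Given src=/swap, name=none, fstype=swap, opts=sw, dump=0, passno=0, the line written is:
    # appended to /etc/fstab
    /swap none swap sw 0 0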
Feb 02 11:04:26 compute-0 sudo[36265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctgaeikdbcicknsuaxawkjrbxukzyghf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030266.279207-637-145466826074712/AnsiballZ_file.py'
Feb 02 11:04:26 compute-0 sudo[36265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:26 compute-0 python3.9[36267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:04:26 compute-0 sudo[36265]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:27 compute-0 sudo[36417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxutnsubfperdanthwevnbwaduiftoph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030266.87051-661-133858782761897/AnsiballZ_stat.py'
Feb 02 11:04:27 compute-0 sudo[36417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:27 compute-0 python3.9[36419]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:04:27 compute-0 sudo[36417]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:27 compute-0 sudo[36540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcylxnmndypcbqifzpoqcahtfveszwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030266.87051-661-133858782761897/AnsiballZ_copy.py'
Feb 02 11:04:27 compute-0 sudo[36540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:33 compute-0 python3.9[36542]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030266.87051-661-133858782761897/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:04:33 compute-0 sudo[36540]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:35 compute-0 sudo[36693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyoprzzdilbcmcqbglrospazwalkydnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030275.3251176-733-28211610264587/AnsiballZ_stat.py'
Feb 02 11:04:35 compute-0 sudo[36693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:35 compute-0 python3.9[36695]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:04:35 compute-0 sudo[36693]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:36 compute-0 sudo[36845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnuxppvoncgqgvmcoqbhgrgpnhtkwoow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030275.8349721-757-263769410918656/AnsiballZ_command.py'
Feb 02 11:04:36 compute-0 sudo[36845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:36 compute-0 python3.9[36847]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:04:36 compute-0 sudo[36845]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:36 compute-0 sudo[36998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhwrjzpdzcacoifszjrhtuynddjcqra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030276.477306-781-119410626804808/AnsiballZ_file.py'
Feb 02 11:04:36 compute-0 sudo[36998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:36 compute-0 python3.9[37000]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:04:36 compute-0 sudo[36998]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:37 compute-0 sudo[37150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncfpggeeykxzmfgdwscacvpqxnlugdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030277.35491-814-95180234756883/AnsiballZ_getent.py'
Feb 02 11:04:37 compute-0 sudo[37150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:37 compute-0 python3.9[37152]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 02 11:04:37 compute-0 sudo[37150]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:37 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:04:37 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:04:38 compute-0 sudo[37304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpfbdmhitqimmmgvdrgblaetzyajuqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030278.1231065-838-113616888147211/AnsiballZ_group.py'
Feb 02 11:04:38 compute-0 sudo[37304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:38 compute-0 python3.9[37306]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:04:38 compute-0 groupadd[37307]: group added to /etc/group: name=qemu, GID=107
Feb 02 11:04:38 compute-0 groupadd[37307]: group added to /etc/gshadow: name=qemu
Feb 02 11:04:38 compute-0 groupadd[37307]: new group: name=qemu, GID=107
Feb 02 11:04:38 compute-0 sudo[37304]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:39 compute-0 sudo[37462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktulhpqdljuctwgvrfgyjurpbzacdpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030279.0023384-862-134408325862625/AnsiballZ_user.py'
Feb 02 11:04:39 compute-0 sudo[37462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:39 compute-0 python3.9[37464]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 11:04:39 compute-0 useradd[37466]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 11:04:39 compute-0 sudo[37462]: pam_unix(sudo:session): session closed for user root
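[annotation] The two tasks above create the qemu group and user with a pinned UID/GID of 107. Shell equivalent:
    groupadd -g 107 qemu
    useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu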
Feb 02 11:04:40 compute-0 sudo[37622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gckovmqpxpcnisaanzvwommfrqtbakap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030279.8548927-886-132467126686261/AnsiballZ_getent.py'
Feb 02 11:04:40 compute-0 sudo[37622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:40 compute-0 python3.9[37624]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 02 11:04:40 compute-0 sudo[37622]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:40 compute-0 sudo[37775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axlurkoawaoyzdytbjdsaqlhwqcfayka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030280.426713-910-209709922440165/AnsiballZ_group.py'
Feb 02 11:04:40 compute-0 sudo[37775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:40 compute-0 python3.9[37777]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:04:40 compute-0 groupadd[37778]: group added to /etc/group: name=hugetlbfs, GID=42477
Feb 02 11:04:40 compute-0 groupadd[37778]: group added to /etc/gshadow: name=hugetlbfs
Feb 02 11:04:40 compute-0 groupadd[37778]: new group: name=hugetlbfs, GID=42477
Feb 02 11:04:40 compute-0 sudo[37775]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:41 compute-0 sudo[37933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvbmxndkjlypcqjcftckyebwenojblyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030281.1078389-937-106023354865158/AnsiballZ_file.py'
Feb 02 11:04:41 compute-0 sudo[37933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:41 compute-0 python3.9[37935]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 02 11:04:41 compute-0 sudo[37933]: pam_unix(sudo:session): session closed for user root
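[annotation] Shell equivalent of the vhost_sockets task, including the SELinux fields it requests (chcon is not persistent across a full relabel; semanage fcontext plus restorecon would be the durable form):
    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets && chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets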
Feb 02 11:04:42 compute-0 sudo[38085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhupnkgzdheevjngocyulfnjxpxvoum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030282.0259013-970-30029281366019/AnsiballZ_dnf.py'
Feb 02 11:04:42 compute-0 sudo[38085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:42 compute-0 python3.9[38087]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:04:44 compute-0 sudo[38085]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:44 compute-0 sudo[38238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgjbcqgviivrsvyjkmrhyzvwcihbwfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030284.674987-994-170954392860175/AnsiballZ_file.py'
Feb 02 11:04:44 compute-0 sudo[38238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:45 compute-0 python3.9[38240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:04:45 compute-0 sudo[38238]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:45 compute-0 sudo[38390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbwuaaeemphiyeeviuhbeazldbksbhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030285.2466352-1018-107617280279512/AnsiballZ_stat.py'
Feb 02 11:04:45 compute-0 sudo[38390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:45 compute-0 python3.9[38392]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:04:45 compute-0 sudo[38390]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:45 compute-0 sudo[38513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmhinqidvdikawqglabwpapzbbuqhwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030285.2466352-1018-107617280279512/AnsiballZ_copy.py'
Feb 02 11:04:45 compute-0 sudo[38513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:46 compute-0 python3.9[38515]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770030285.2466352-1018-107617280279512/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:04:46 compute-0 sudo[38513]: pam_unix(sudo:session): session closed for user root
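[annotation] The copied 99-edpm.conf body is not shown (content=NOT_LOGGING_PARAMETER), but the systemd-modules-load restart just below inserts br_netfilter, so the file evidently lists at least that module. modules-load.d syntax is one module name per line:
    # /etc/modules-load.d/99-edpm.conf -- reconstructed; only br_netfilter is confirmed by this log
    br_netfilter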
Feb 02 11:04:46 compute-0 sudo[38665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnfxjmlrleklnkkfcdtguuesuojioya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030286.3114343-1063-187631986029989/AnsiballZ_systemd.py'
Feb 02 11:04:46 compute-0 sudo[38665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:47 compute-0 python3.9[38667]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:04:47 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 11:04:47 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 02 11:04:47 compute-0 kernel: Bridge firewalling registered
Feb 02 11:04:47 compute-0 systemd-modules-load[38671]: Inserted module 'br_netfilter'
Feb 02 11:04:47 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 02 11:04:47 compute-0 sudo[38665]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:47 compute-0 sudo[38825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbamabbtnnmhmgpoemqlortfpbvqutkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030287.4294214-1087-49437575560037/AnsiballZ_stat.py'
Feb 02 11:04:47 compute-0 sudo[38825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:47 compute-0 python3.9[38827]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:04:47 compute-0 sudo[38825]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:48 compute-0 sudo[38948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxcunqzjqjtlpsmqybrgrfzafbtibcle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030287.4294214-1087-49437575560037/AnsiballZ_copy.py'
Feb 02 11:04:48 compute-0 sudo[38948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:48 compute-0 python3.9[38950]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770030287.4294214-1087-49437575560037/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:04:48 compute-0 sudo[38948]: pam_unix(sudo:session): session closed for user root
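[annotation] As with the modules file, the sysctl drop-in's body is withheld from the log. sysctl.d syntax is key = value, one per line; the keys below are purely illustrative placeholders and are not confirmed anywhere in this log:
    # /etc/sysctl.d/99-edpm.conf -- illustrative placeholders only
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1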
Feb 02 11:04:49 compute-0 sudo[39100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpeiwlcshwxoovpkvryvzkqbbopvzyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030288.8729546-1141-159965949459241/AnsiballZ_dnf.py'
Feb 02 11:04:49 compute-0 sudo[39100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:04:49 compute-0 python3.9[39102]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:04:53 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:04:53 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:04:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:04:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:04:54 compute-0 systemd[1]: Reloading.
Feb 02 11:04:54 compute-0 systemd-rc-local-generator[39163]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:04:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:04:54 compute-0 sudo[39100]: pam_unix(sudo:session): session closed for user root
Feb 02 11:04:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:04:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:04:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.482s CPU time.
Feb 02 11:04:57 compute-0 systemd[1]: run-refac8c9b097b4b6f8813f92c65bdcfcb.service: Deactivated successfully.
Feb 02 11:04:57 compute-0 python3.9[42880]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:04:58 compute-0 python3.9[43032]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 02 11:04:59 compute-0 python3.9[43182]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:04:59 compute-0 sudo[43332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrtsvxhluvpyjqvobpwuczrcbfmvmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030299.7353241-1258-43869250363360/AnsiballZ_command.py'
Feb 02 11:04:59 compute-0 sudo[43332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:00 compute-0 python3.9[43334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:00 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 11:05:00 compute-0 systemd[1]: Starting Authorization Manager...
Feb 02 11:05:00 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 11:05:00 compute-0 polkitd[43551]: Started polkitd version 0.117
Feb 02 11:05:00 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 11:05:00 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 11:05:00 compute-0 polkitd[43551]: Finished loading, compiling and executing 2 rules
Feb 02 11:05:00 compute-0 systemd[1]: Started Authorization Manager.
Feb 02 11:05:00 compute-0 polkitd[43551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 02 11:05:00 compute-0 sudo[43332]: pam_unix(sudo:session): session closed for user root
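[annotation] Once tuned-adm returns, the profile switch can be confirmed with the daemon's own tooling:
    tuned-adm active   # expected: Current active profile: throughput-performance
    tuned-adm verify   # re-checks that the profile's settings are in effect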
Feb 02 11:05:01 compute-0 sudo[43719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secfmfuedrrtsyahlxbvhlclruymapdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030301.0668702-1285-171627656836922/AnsiballZ_systemd.py'
Feb 02 11:05:01 compute-0 sudo[43719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:01 compute-0 python3.9[43721]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:05:01 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 02 11:05:01 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 02 11:05:01 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 02 11:05:01 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 11:05:01 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 11:05:01 compute-0 sudo[43719]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:02 compute-0 python3.9[43883]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 02 11:05:06 compute-0 sudo[44033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmoomfdtebaozcgaewttmekcpkzrtkmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030305.847942-1456-209015399999932/AnsiballZ_systemd.py'
Feb 02 11:05:06 compute-0 sudo[44033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:06 compute-0 python3.9[44035]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:05:06 compute-0 systemd[1]: Reloading.
Feb 02 11:05:06 compute-0 systemd-rc-local-generator[44064]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:05:06 compute-0 sudo[44033]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:07 compute-0 sudo[44221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayncyceqjywauhgbxekxxzpsgmxbfkiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030306.7408512-1456-190957713934797/AnsiballZ_systemd.py'
Feb 02 11:05:07 compute-0 sudo[44221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:07 compute-0 python3.9[44223]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:05:07 compute-0 systemd[1]: Reloading.
Feb 02 11:05:07 compute-0 systemd-rc-local-generator[44245]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:05:07 compute-0 sudo[44221]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:08 compute-0 sudo[44410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywsdsnvrhfkynlcabfcybzwglczwktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030307.9159315-1504-80700841005438/AnsiballZ_command.py'
Feb 02 11:05:08 compute-0 sudo[44410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:08 compute-0 python3.9[44412]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:08 compute-0 sudo[44410]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:08 compute-0 sudo[44563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hntiogugdfrzpeqpiwicyipzjjkculwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030308.5578604-1528-23787103307334/AnsiballZ_command.py'
Feb 02 11:05:08 compute-0 sudo[44563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:08 compute-0 python3.9[44565]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:08 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 02 11:05:08 compute-0 sudo[44563]: pam_unix(sudo:session): session closed for user root
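[annotation] Taken together, the swap tasks from 11:04:20 onward reduce to this sequence, which the kernel message above confirms (1048572k is the 1 GiB file minus the one-page swap header):
    dd if=/dev/zero of=/swap count=1024 bs=1M   # skipped when /swap already exists (creates=/swap)
    chown root:root /swap && chmod 0600 /swap
    mkswap /swap
    swapon /swap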
Feb 02 11:05:09 compute-0 sudo[44716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrysolaxgcoremcbmzymsoupmtksiedg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030309.1767042-1552-72834913406161/AnsiballZ_command.py'
Feb 02 11:05:09 compute-0 sudo[44716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:09 compute-0 python3.9[44718]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:10 compute-0 sudo[44716]: pam_unix(sudo:session): session closed for user root
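[annotation] update-ca-trust is what makes the tls-ca-bundle.pem anchor copied at 11:04:33 effective: it regenerates the consolidated stores under /etc/pki/ca-trust/extracted/. A quick sanity check after the run:
    trust list | head   # enumerate trusted anchors via p11-kit
    grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt   # count certs in the regenerated bundle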
Feb 02 11:05:11 compute-0 sudo[44878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rktcgpugobajnwhqizmbvomdxhkbdcql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030311.1341834-1576-124891617556316/AnsiballZ_command.py'
Feb 02 11:05:11 compute-0 sudo[44878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:11 compute-0 python3.9[44880]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:11 compute-0 sudo[44878]: pam_unix(sudo:session): session closed for user root
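[annotation] Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all merged pages. Note, though, that this task runs through ansible.legacy.command with _uses_shell=False, so no shell performs the redirection; '>/sys/kernel/mm/ksm/run' is passed to echo as a literal argument and the file is never actually written. A form that performs the write, plus a check:
    echo 2 > /sys/kernel/mm/ksm/run   # needs a shell (or use: tee /sys/kernel/mm/ksm/run <<<2)
    cat /sys/kernel/mm/ksm/run        # 0=stopped, 1=running, 2=stopped and unmerged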
Feb 02 11:05:11 compute-0 sudo[45031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqtrzxxhkbkwdhviqcwczzooisysdhjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030311.7423577-1600-195050922338289/AnsiballZ_systemd.py'
Feb 02 11:05:11 compute-0 sudo[45031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:12 compute-0 python3.9[45033]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:05:12 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 02 11:05:12 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Feb 02 11:05:12 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Feb 02 11:05:12 compute-0 systemd[1]: Starting Apply Kernel Variables...
Feb 02 11:05:12 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 02 11:05:12 compute-0 systemd[1]: Finished Apply Kernel Variables.
Feb 02 11:05:12 compute-0 sudo[45031]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:12 compute-0 sshd-session[31360]: Connection closed by 192.168.122.30 port 51888
Feb 02 11:05:12 compute-0 sshd-session[31357]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:05:12 compute-0 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Feb 02 11:05:12 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Feb 02 11:05:12 compute-0 systemd[1]: session-9.scope: Consumed 2min 14.223s CPU time.
Feb 02 11:05:12 compute-0 systemd-logind[793]: Removed session 9.
Feb 02 11:05:18 compute-0 sshd-session[45063]: Accepted publickey for zuul from 192.168.122.30 port 54220 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:05:18 compute-0 systemd-logind[793]: New session 10 of user zuul.
Feb 02 11:05:18 compute-0 systemd[1]: Started Session 10 of User zuul.
Feb 02 11:05:18 compute-0 sshd-session[45063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:05:19 compute-0 python3.9[45216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:05:20 compute-0 sudo[45370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftbxirpdfouwyoenccxcwullixswqeam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030319.9552999-63-128672882230532/AnsiballZ_getent.py'
Feb 02 11:05:20 compute-0 sudo[45370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:20 compute-0 python3.9[45372]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 02 11:05:20 compute-0 sudo[45370]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:21 compute-0 sudo[45523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kebxwjcygakdvgbatawrgwzmuzyuzlkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030320.6620433-87-203678661527234/AnsiballZ_group.py'
Feb 02 11:05:21 compute-0 sudo[45523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:21 compute-0 python3.9[45525]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:05:21 compute-0 groupadd[45526]: group added to /etc/group: name=openvswitch, GID=42476
Feb 02 11:05:21 compute-0 groupadd[45526]: group added to /etc/gshadow: name=openvswitch
Feb 02 11:05:21 compute-0 groupadd[45526]: new group: name=openvswitch, GID=42476
Feb 02 11:05:21 compute-0 sudo[45523]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:21 compute-0 sudo[45681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsnrbcngtogtbstqcgjwaiijqcxlerjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030321.440672-111-115167830339535/AnsiballZ_user.py'
Feb 02 11:05:21 compute-0 sudo[45681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:22 compute-0 python3.9[45683]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 11:05:22 compute-0 useradd[45685]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 11:05:22 compute-0 useradd[45685]: add 'openvswitch' to group 'hugetlbfs'
Feb 02 11:05:22 compute-0 useradd[45685]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 02 11:05:22 compute-0 sudo[45681]: pam_unix(sudo:session): session closed for user root
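[The getent/group/user tasks above pin the openvswitch account to a fixed GID/UID of 42476 and add it to the hugetlbfs supplementary group, so hugepage-backed datapaths stay accessible to the daemon. The direct shell equivalent of what groupadd/useradd logged:]

    getent passwd openvswitch || {
        groupadd -g 42476 openvswitch
        useradd -u 42476 -g openvswitch -G hugetlbfs -s /sbin/nologin \
                -c "openvswitch user" openvswitch
    }
    id openvswitch   # expect uid=42476 gid=42476 groups include hugetlbfs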
Feb 02 11:05:22 compute-0 sudo[45841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxfgvwoixtxbxdrpokcfsptpdwigebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030322.516432-141-132069368941110/AnsiballZ_setup.py'
Feb 02 11:05:22 compute-0 sudo[45841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:23 compute-0 python3.9[45843]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:05:23 compute-0 sudo[45841]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:23 compute-0 sudo[45925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtjeyytmdhqnemyhmbcchvlfibdkzgun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030322.516432-141-132069368941110/AnsiballZ_dnf.py'
Feb 02 11:05:23 compute-0 sudo[45925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:23 compute-0 python3.9[45927]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 11:05:26 compute-0 sudo[45925]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:27 compute-0 sudo[46089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfblwwpuowlraljrfbbvlxardadfkcxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030327.0055869-183-11352044273122/AnsiballZ_dnf.py'
Feb 02 11:05:27 compute-0 sudo[46089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:27 compute-0 python3.9[46091]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
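[The two dnf tasks above deliberately split fetch from install (download_only=True first, then state=present), so a repo outage fails fast before anything is changed. The shell equivalent:]

    dnf -y install --downloadonly openvswitch   # first pass: fetch packages into the cache only
    dnf -y install openvswitch                  # second pass: install from the cached packages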
Feb 02 11:05:30 compute-0 sshd-session[46106]: Invalid user ubuntu from 80.94.92.186 port 37196
Feb 02 11:05:30 compute-0 sshd-session[46106]: Connection closed by invalid user ubuntu 80.94.92.186 port 37196 [preauth]
Feb 02 11:05:42 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:05:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:05:42 compute-0 groupadd[46116]: group added to /etc/group: name=unbound, GID=994
Feb 02 11:05:42 compute-0 groupadd[46116]: group added to /etc/gshadow: name=unbound
Feb 02 11:05:42 compute-0 groupadd[46116]: new group: name=unbound, GID=994
Feb 02 11:05:42 compute-0 useradd[46123]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Feb 02 11:05:42 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 02 11:05:42 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 02 11:05:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:05:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:05:43 compute-0 systemd[1]: Reloading.
Feb 02 11:05:43 compute-0 systemd-rc-local-generator[46621]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:05:43 compute-0 systemd-sysv-generator[46624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:05:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:05:44 compute-0 sudo[46089]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:05:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:05:44 compute-0 systemd[1]: run-rac8ffbea7fec413c9ef4bbfc2d75ae37.service: Deactivated successfully.
Feb 02 11:05:45 compute-0 sudo[47189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llycqclrkvlhpuwritguaeobacbpazay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030344.9288158-207-9477105801889/AnsiballZ_systemd.py'
Feb 02 11:05:45 compute-0 sudo[47189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:45 compute-0 python3.9[47191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:05:45 compute-0 systemd[1]: Reloading.
Feb 02 11:05:45 compute-0 systemd-sysv-generator[47221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:05:45 compute-0 systemd-rc-local-generator[47218]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:05:46 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Feb 02 11:05:46 compute-0 chown[47233]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 02 11:05:46 compute-0 ovs-ctl[47238]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 02 11:05:46 compute-0 ovs-ctl[47238]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 02 11:05:46 compute-0 ovs-ctl[47238]: Starting ovsdb-server [  OK  ]
Feb 02 11:05:46 compute-0 ovs-vsctl[47287]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 02 11:05:46 compute-0 ovs-vsctl[47307]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e4587b97-1121-4d6d-b583-e59641a06362\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 02 11:05:46 compute-0 ovs-ctl[47238]: Configuring Open vSwitch system IDs [  OK  ]
Feb 02 11:05:46 compute-0 ovs-ctl[47238]: Enabling remote OVSDB managers [  OK  ]
Feb 02 11:05:46 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Feb 02 11:05:46 compute-0 ovs-vsctl[47313]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 02 11:05:46 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 02 11:05:46 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 02 11:05:46 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 02 11:05:46 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Feb 02 11:05:46 compute-0 ovs-ctl[47357]: Inserting openvswitch module [  OK  ]
Feb 02 11:05:46 compute-0 ovs-ctl[47326]: Starting ovs-vswitchd [  OK  ]
Feb 02 11:05:46 compute-0 ovs-vsctl[47376]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 02 11:05:46 compute-0 ovs-ctl[47326]: Enabling remote OVSDB managers [  OK  ]
Feb 02 11:05:46 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 02 11:05:46 compute-0 systemd[1]: Starting Open vSwitch...
Feb 02 11:05:46 compute-0 systemd[1]: Finished Open vSwitch.
Feb 02 11:05:46 compute-0 sudo[47189]: pam_unix(sudo:session): session closed for user root
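[First start of the Open vSwitch unit chain: ovs-ctl creates an empty /etc/openvswitch/conf.db, starts ovsdb-server, seeds the system IDs, then ovs-vswitchd loads the kernel datapath module. The earlier chown complaint about /run/openvswitch looks like harmless first-boot noise, since ovs-ctl creates that directory moments later. A quick verification sketch:]

    systemctl enable --now openvswitch.service    # what the systemd task did
    ovs-vsctl show                                # empty config, but confirms the daemons answer
    ovs-vsctl get Open_vSwitch . ovs_version      # should report 3.3.5 per the log above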
Feb 02 11:05:47 compute-0 python3.9[47527]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:05:47 compute-0 sudo[47677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzbsxobwqywzeyqnuxmqnnraabgjsis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030347.5266-261-188049473102929/AnsiballZ_sefcontext.py'
Feb 02 11:05:47 compute-0 sudo[47677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:48 compute-0 python3.9[47679]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 02 11:05:49 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:05:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:05:49 compute-0 sudo[47677]: pam_unix(sudo:session): session closed for user root
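[community.general.sefcontext records a local file-context rule and reloads policy, which is what triggers the second "Converting ... SID table entries" burst above. The semanage/restorecon equivalent, assuming you run restorecon once /var/lib/edpm-config exists (the file task a few lines below creates it):]

    semanage fcontext -a -t container_file_t -r s0 '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config   # relabel existing files to container_file_t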
Feb 02 11:05:50 compute-0 python3.9[47834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:05:50 compute-0 sudo[47990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjawwuifgucaymqpoxhrjwatvdvpybup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030350.7195466-315-165398039783576/AnsiballZ_dnf.py'
Feb 02 11:05:50 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 02 11:05:50 compute-0 sudo[47990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:51 compute-0 python3.9[47992]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:05:52 compute-0 sudo[47990]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:53 compute-0 sudo[48143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczzvmdmfqqrfwoamvtbrmapjjyyflju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030352.7491581-339-57755917277396/AnsiballZ_command.py'
Feb 02 11:05:53 compute-0 sudo[48143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:53 compute-0 python3.9[48145]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:05:53 compute-0 sudo[48143]: pam_unix(sudo:session): session closed for user root
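[The pair of tasks above first installs the EDPM base tool set, then runs rpm -V over the same list; rpm -V compares each installed file against the rpmdb (size, digest, mode, owner) and stays silent with exit 0 when nothing was tampered with. In shell terms:]

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos
    rpm -V nftables NetworkManager grubby   # same idea as the logged task, shown for a subset; no output means files match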
Feb 02 11:05:54 compute-0 sudo[48430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjqmemhypdgrxmzxduoexwxjjijuull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030354.1560965-363-26061400800992/AnsiballZ_file.py'
Feb 02 11:05:54 compute-0 sudo[48430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:54 compute-0 python3.9[48432]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 02 11:05:54 compute-0 sudo[48430]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:55 compute-0 python3.9[48582]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:05:55 compute-0 sudo[48734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlctagujcvycoaugciwbwarddxfaseso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030355.5981135-411-98773157365733/AnsiballZ_dnf.py'
Feb 02 11:05:55 compute-0 sudo[48734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:56 compute-0 python3.9[48736]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:05:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:05:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:05:57 compute-0 systemd[1]: Reloading.
Feb 02 11:05:58 compute-0 systemd-rc-local-generator[48777]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:05:58 compute-0 systemd-sysv-generator[48780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:05:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:05:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:05:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:05:58 compute-0 systemd[1]: run-rf3140f3dc1d64c49b5f5d8588b15f025.service: Deactivated successfully.
Feb 02 11:05:58 compute-0 sudo[48734]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:58 compute-0 sudo[49053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbbssseaowbzpvwemgqxemqyfmgqlio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030358.7305744-435-62538184928091/AnsiballZ_systemd.py'
Feb 02 11:05:58 compute-0 sudo[49053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:05:59 compute-0 python3.9[49055]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:05:59 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 02 11:05:59 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Feb 02 11:05:59 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Feb 02 11:05:59 compute-0 systemd[1]: Stopping Network Manager...
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2719] caught SIGTERM, shutting down normally.
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2733] dhcp4 (eth0): canceled DHCP transaction
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2733] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2733] dhcp4 (eth0): state changed no lease
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2735] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 11:05:59 compute-0 NetworkManager[7191]: <info>  [1770030359.2789] exiting (success)
Feb 02 11:05:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 11:05:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 11:05:59 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 02 11:05:59 compute-0 systemd[1]: Stopped Network Manager.
Feb 02 11:05:59 compute-0 systemd[1]: NetworkManager.service: Consumed 16.244s CPU time, 4.1M memory peak, read 0B from disk, written 17.0K to disk.
Feb 02 11:05:59 compute-0 systemd[1]: Starting Network Manager...
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.3340] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d0c6c7d7-6431-45ce-a025-35dcf8d61f8d)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.3343] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.3398] manager[0x55fc94a3b000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 11:05:59 compute-0 systemd[1]: Starting Hostname Service...
Feb 02 11:05:59 compute-0 systemd[1]: Started Hostname Service.
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4114] hostname: hostname: using hostnamed
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4114] hostname: static hostname changed from (none) to "compute-0"
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4120] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4125] manager[0x55fc94a3b000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4125] manager[0x55fc94a3b000]: rfkill: WWAN hardware radio set enabled
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4145] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4152] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4153] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4154] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4154] manager: Networking is enabled by state file
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4156] settings: Loaded settings plugin: keyfile (internal)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4159] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4187] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4196] dhcp: init: Using DHCP client 'internal'
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4198] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4203] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4207] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4214] device (lo): Activation: starting connection 'lo' (4fdf3fa4-43de-4352-8e3d-ab6325fd58e4)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4219] device (eth0): carrier: link connected
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4222] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4225] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4226] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4235] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4242] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4249] device (eth1): carrier: link connected
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4252] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4257] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16) (indicated)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4258] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4262] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4268] device (eth1): Activation: starting connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16)
Feb 02 11:05:59 compute-0 systemd[1]: Started Network Manager.
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4273] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4282] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4284] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4287] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4289] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4292] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4294] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4296] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4300] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4307] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4310] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4317] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4326] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4349] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4357] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 11:05:59 compute-0 sudo[49053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4744] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4751] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4752] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4754] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4760] device (lo): Activation: successful, device activated.
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4781] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4785] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4788] device (eth1): Activation: successful, device activated.
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4798] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4799] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4802] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4804] device (eth0): Activation: successful, device activated.
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4809] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 02 11:05:59 compute-0 NetworkManager[49067]: <info>  [1770030359.4811] manager: startup complete
Feb 02 11:05:59 compute-0 systemd[1]: Starting Network Manager Wait Online...
Feb 02 11:05:59 compute-0 systemd[1]: Finished Network Manager Wait Online.
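[This restart is what picks up the just-installed NetworkManager-ovs plugin (the NMOvsFactory line above); NetworkManager then re-assumes lo, eth0, and eth1 and re-acquires the DHCP lease, and wait-online gates later tasks until startup completes. A sketch of the same gate done by hand:]

    systemctl restart NetworkManager
    nm-online -s -t 60      # block until NetworkManager reports startup complete
    nmcli device status     # eth0/eth1 should show "connected" again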
Feb 02 11:05:59 compute-0 sudo[49279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoxpmucknbitsdpgoembwpqyhfllvuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030359.652568-459-166605087701144/AnsiballZ_dnf.py'
Feb 02 11:05:59 compute-0 sudo[49279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:00 compute-0 python3.9[49281]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:06:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:06:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:06:04 compute-0 systemd[1]: Reloading.
Feb 02 11:06:04 compute-0 systemd-rc-local-generator[49332]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:06:04 compute-0 systemd-sysv-generator[49335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:06:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:06:06 compute-0 sudo[49279]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:06:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:06:06 compute-0 systemd[1]: run-rf5d686324b6242719c0eb5ae228226b3.service: Deactivated successfully.
Feb 02 11:06:06 compute-0 sudo[49746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbskognyrjvnxmdnxlvcqndmjhrdbmmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030366.5586216-495-180834025452175/AnsiballZ_stat.py'
Feb 02 11:06:06 compute-0 sudo[49746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:06 compute-0 python3.9[49748]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:06:06 compute-0 sudo[49746]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:07 compute-0 sudo[49898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnygmmykqaondfmsgfuwmxxfdjxatzsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030367.1756263-522-17785745689519/AnsiballZ_ini_file.py'
Feb 02 11:06:07 compute-0 sudo[49898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:07 compute-0 python3.9[49900]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:07 compute-0 sudo[49898]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:08 compute-0 sudo[50052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzxuofkssrzfqxwtgmujrloipfygyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030368.0016968-552-212407445658771/AnsiballZ_ini_file.py'
Feb 02 11:06:08 compute-0 sudo[50052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:08 compute-0 python3.9[50054]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:08 compute-0 sudo[50052]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:08 compute-0 sudo[50204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafimorzywoocmapydfvpwczqpejlzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030368.5551603-552-162817905710501/AnsiballZ_ini_file.py'
Feb 02 11:06:08 compute-0 sudo[50204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:08 compute-0 python3.9[50206]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:09 compute-0 sudo[50204]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:09 compute-0 sudo[50356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-solonsvixbxfvqlqitgynrryyrijtwcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030369.2156007-597-37953557674334/AnsiballZ_ini_file.py'
Feb 02 11:06:09 compute-0 sudo[50356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 11:06:09 compute-0 python3.9[50358]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:09 compute-0 sudo[50356]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:10 compute-0 sudo[50508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvriuymtcndczbbttpfexgltarkhcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030369.82247-597-74401025870177/AnsiballZ_ini_file.py'
Feb 02 11:06:10 compute-0 sudo[50508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:10 compute-0 python3.9[50510]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:10 compute-0 sudo[50508]: pam_unix(sudo:session): session closed for user root
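[The five ini_file tasks above leave [main] with no-auto-default=* and strip any dns=/rc-manager= overrides from both NetworkManager.conf and the cloud-init drop-in, so NetworkManager keeps managing resolv.conf but stops auto-activating freshly appeared wired devices. crudini, installed earlier in this run, can express the same edits; note crudini --del drops the key regardless of its current value, slightly blunter than the value-matched removals logged above:]

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager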
Feb 02 11:06:10 compute-0 sudo[50660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijtqkimdjnwgmysiyctoaypxehpqjep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030370.3779576-642-235674309306955/AnsiballZ_stat.py'
Feb 02 11:06:10 compute-0 sudo[50660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:10 compute-0 python3.9[50662]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:06:10 compute-0 sudo[50660]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:11 compute-0 sudo[50783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llslyiemihqaavpltmkonccwampxitjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030370.3779576-642-235674309306955/AnsiballZ_copy.py'
Feb 02 11:06:11 compute-0 sudo[50783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:11 compute-0 python3.9[50785]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030370.3779576-642-235674309306955/.source _original_basename=.h87e925_ follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:11 compute-0 sudo[50783]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:11 compute-0 sudo[50935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbwdvpzhisijobudcgdmzypcjendgfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030371.5816772-687-55535411691303/AnsiballZ_file.py'
Feb 02 11:06:11 compute-0 sudo[50935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:12 compute-0 python3.9[50937]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:12 compute-0 sudo[50935]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:12 compute-0 sudo[51087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjtxrdesxprxzlciblaubqyodfludgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030372.2091837-711-44697395925353/AnsiballZ_edpm_os_net_config_mappings.py'
Feb 02 11:06:12 compute-0 sudo[51087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:12 compute-0 python3.9[51089]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 02 11:06:12 compute-0 sudo[51087]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:13 compute-0 sudo[51239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kberckarbkyjewxtpkvamroaewrhaawo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030373.012244-738-191419379924334/AnsiballZ_file.py'
Feb 02 11:06:13 compute-0 sudo[51239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:13 compute-0 python3.9[51241]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:13 compute-0 sudo[51239]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:14 compute-0 sudo[51391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpqvqqhgdvkhaclnzhxuejdrrvfjnez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030373.9620602-768-31438183899711/AnsiballZ_stat.py'
Feb 02 11:06:14 compute-0 sudo[51391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:14 compute-0 sudo[51391]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:14 compute-0 sudo[51514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpmcjxeokpfguabtgfymsfrhnsttjinj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030373.9620602-768-31438183899711/AnsiballZ_copy.py'
Feb 02 11:06:14 compute-0 sudo[51514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:14 compute-0 sudo[51514]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:15 compute-0 sudo[51666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muqcvgrhhxulufxuhahyhuldhlwfeyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030375.0388734-813-245998591433441/AnsiballZ_slurp.py'
Feb 02 11:06:15 compute-0 sudo[51666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:15 compute-0 python3.9[51668]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 02 11:06:15 compute-0 sudo[51666]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:16 compute-0 sudo[51841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqyrexqgqkmmxxanduaralxhfrqbnjgs ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030375.8990765-840-158902517517713/async_wrapper.py j332673910823 300 /home/zuul/.ansible/tmp/ansible-tmp-1770030375.8990765-840-158902517517713/AnsiballZ_edpm_os_net_config.py _'
Feb 02 11:06:16 compute-0 sudo[51841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:16 compute-0 ansible-async_wrapper.py[51843]: Invoked with j332673910823 300 /home/zuul/.ansible/tmp/ansible-tmp-1770030375.8990765-840-158902517517713/AnsiballZ_edpm_os_net_config.py _
Feb 02 11:06:16 compute-0 ansible-async_wrapper.py[51846]: Starting module and watcher
Feb 02 11:06:16 compute-0 ansible-async_wrapper.py[51846]: Start watching 51847 (300)
Feb 02 11:06:16 compute-0 ansible-async_wrapper.py[51847]: Start module (51847)
Feb 02 11:06:16 compute-0 ansible-async_wrapper.py[51843]: Return async_wrapper task started.
Feb 02 11:06:16 compute-0 sudo[51841]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:17 compute-0 python3.9[51848]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
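[The async wrapper launches edpm_os_net_config, which drives os-net-config against /etc/os-net-config/config.yaml with the NetworkManager/nmstate provider (use_nmstate=True) and a checkpointed rollback window (the checkpoint-create audit line a few lines below). A roughly equivalent manual invocation, using os-net-config's own CLI flags; --cleanup removes managed interfaces not listed in the config:]

    os-net-config --debug --detailed-exit-codes --cleanup \
                  -c /etc/os-net-config/config.yaml
    echo $?   # with --detailed-exit-codes, exit 2 means "changes applied", not an error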
Feb 02 11:06:17 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 02 11:06:17 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 02 11:06:17 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 02 11:06:17 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 02 11:06:17 compute-0 kernel: cfg80211: failed to load regulatory.db
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.7963] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.7992] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8521] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8523] audit: op="connection-add" uuid="1a4bf9a7-892f-41da-85b9-6376582bcbd8" name="br-ex-br" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8536] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8538] audit: op="connection-add" uuid="bb89703b-a8ee-4574-920b-c6b4ec1313b8" name="br-ex-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8548] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8549] audit: op="connection-add" uuid="dd533800-8b10-4dca-8b6d-ae8c595e9565" name="eth1-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8577] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8580] audit: op="connection-add" uuid="33a1124e-78e4-40e5-8093-9e3a4e64fe42" name="vlan20-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8596] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8598] audit: op="connection-add" uuid="2bb94b19-b72b-4e07-9d39-6440dc8b7c7e" name="vlan21-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8615] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8617] audit: op="connection-add" uuid="c15b8fa3-22a7-49ee-a193-716c010c1154" name="vlan22-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8629] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8639] audit: op="connection-add" uuid="dbbc7af0-1284-40ca-acb5-70c029ee6604" name="vlan23-port" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8664] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8684] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8686] audit: op="connection-add" uuid="67c770eb-a00c-4940-a00f-59d49170610a" name="br-ex-if" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8776] audit: op="connection-update" uuid="197a2725-4d03-536f-a6da-d1aac3072b16" name="ci-private-network" args="connection.master,connection.timestamp,connection.slave-type,connection.controller,connection.port-type,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ovs-interface.type,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8805] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8806] audit: op="connection-add" uuid="de0a17b6-0411-4654-863a-ab71c421d2f3" name="vlan20-if" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8823] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8827] audit: op="connection-add" uuid="becb93fe-da36-4860-8115-b20c43d8093a" name="vlan21-if" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8841] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8843] audit: op="connection-add" uuid="1d9a4d34-b794-4f89-882a-095ca638973f" name="vlan22-if" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8856] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8857] audit: op="connection-add" uuid="18c9f68d-2e53-4b89-b80c-65ee911476bb" name="vlan23-if" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8868] audit: op="connection-delete" uuid="bc408819-069b-306f-bf8d-84b09cc827a7" name="Wired connection 1" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8880] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8884] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8890] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8893] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1a4bf9a7-892f-41da-85b9-6376582bcbd8)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8894] audit: op="connection-activate" uuid="1a4bf9a7-892f-41da-85b9-6376582bcbd8" name="br-ex-br" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8895] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8896] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8901] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8904] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (bb89703b-a8ee-4574-920b-c6b4ec1313b8)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8905] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8906] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8910] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8913] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (dd533800-8b10-4dca-8b6d-ae8c595e9565)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8915] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8919] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8929] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8933] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (33a1124e-78e4-40e5-8093-9e3a4e64fe42)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8936] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8938] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8944] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8950] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2bb94b19-b72b-4e07-9d39-6440dc8b7c7e)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8952] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8953] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8959] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8963] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c15b8fa3-22a7-49ee-a193-716c010c1154)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8964] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8965] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8969] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8973] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (dbbc7af0-1284-40ca-acb5-70c029ee6604)
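[editor's note] The repeated "error setting IPv4 forwarding to '1'" warnings above are benign: NetworkManager toggles per-device forwarding through procfs, but OVS "Port" and "Bridge" rows are ovsdb constructs with no kernel netdev at this point, so the write fails (EAGAIN here; "No such file or directory" variants appear below when the ovs-interface rows go through the same path, and one line even mis-reports the errno as "Success"). The knob involved, assuming the standard procfs layout:

    # Exists only once a kernel netdev with that name is present
    # (the ovs-interface, never the ovs-port or ovs-bridge row):
    cat /proc/sys/net/ipv4/conf/br-ex/forwarding
    sysctl net.ipv4.conf.eth1.forwarding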
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8973] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8975] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8978] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8987] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.8988] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8991] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8995] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (67c770eb-a00c-4940-a00f-59d49170610a)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8996] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.8999] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9001] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9002] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
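[editor's note] From here each device walks NetworkManager's activation state machine, visible verbatim in the log: disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated. Controller/port pairs additionally log "attached as port, continuing activation" when enslavement succeeds. The same transitions can be watched live with stock nmcli:

    nmcli monitor                                           # streams device state changes
    nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show br-ex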
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9003] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9014] device (eth1): disconnecting for new activation request.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9014] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9017] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9019] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9020] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9022] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.9023] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9027] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9032] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (de0a17b6-0411-4654-863a-ab71c421d2f3)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9033] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9036] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9038] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9039] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9042] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.9043] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9048] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9053] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (becb93fe-da36-4860-8115-b20c43d8093a)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9054] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9056] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9057] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9058] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9062] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.9063] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9066] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9071] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1d9a4d34-b794-4f89-882a-095ca638973f)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9072] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9075] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9077] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9078] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9081] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <warn>  [1770030378.9082] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9085] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9090] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (18c9f68d-2e53-4b89-b80c-65ee911476bb)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9091] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9095] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9097] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9099] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9101] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9115] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9117] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9121] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9123] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9129] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9133] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9137] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 kernel: ovs-system: entered promiscuous mode
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9141] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9143] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9157] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9163] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9167] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 kernel: Timeout policy base is empty
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9177] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9182] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 systemd-udevd[51853]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9186] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9189] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9191] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9196] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9201] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9204] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9206] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9211] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9216] dhcp4 (eth0): canceled DHCP transaction
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9219] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9219] dhcp4 (eth0): state changed no lease
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9220] dhcp4 (eth0): activation: beginning transaction (no timeout)
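[editor's note] The dhcp4 cluster above is the fallout of the eth0 device-reapply audited earlier (the entry listing ipv4.dhcp-client-id, ipv6.method, 802-3-ethernet.mtu, etc.): the in-flight transaction is canceled, a 45-second one starts, and after the "no lease" reset it settles into a no-timeout transaction until the lease for 38.102.83.181 arrives a few lines below. The manual equivalent of that audit op:

    nmcli device reapply eth0    # re-applies the profile in place; restarts dhcp4 when ipv4 settings changed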
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9230] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9236] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51849 uid=0 result="fail" reason="Device is not activated"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9241] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 02 11:06:18 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 11:06:18 compute-0 kernel: br-ex: entered promiscuous mode
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9453] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9458] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Feb 02 11:06:18 compute-0 kernel: vlan21: entered promiscuous mode
Feb 02 11:06:18 compute-0 systemd-udevd[51854]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9477] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 02 11:06:18 compute-0 kernel: vlan20: entered promiscuous mode
Feb 02 11:06:18 compute-0 systemd-udevd[51946]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:06:18 compute-0 kernel: vlan22: entered promiscuous mode
Feb 02 11:06:18 compute-0 kernel: vlan23: entered promiscuous mode
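[editor's note] The kernel's "entered promiscuous mode" lines are expected, not a fault: openvswitch raises the promiscuity counter on every netdev it attaches to the datapath (ovs-system, br-ex, eth1, the vlan interfaces) so the bridge sees all frames. Verifiable after the fact:

    ip -d link show eth1 | grep -o 'promiscuity [0-9]*'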
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9630] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9756] device (eth1): Activation: starting connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9761] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9762] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9763] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9765] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9766] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9767] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9768] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9772] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9777] device (eth1): disconnecting for new activation request.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9778] audit: op="connection-activate" uuid="197a2725-4d03-536f-a6da-d1aac3072b16" name="ci-private-network" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9781] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9803] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9815] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9823] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9826] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9833] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9839] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9842] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9846] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9850] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9854] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9858] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9864] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9868] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9872] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9876] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9879] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9884] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9895] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9898] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9898] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9904] device (eth1): Activation: starting connection 'ci-private-network' (197a2725-4d03-536f-a6da-d1aac3072b16)
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9931] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9935] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9941] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9955] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9963] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:18 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9986] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 02 11:06:18 compute-0 NetworkManager[49067]: <info>  [1770030378.9990] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0000] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0007] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0015] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0033] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0038] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0039] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0041] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0042] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0043] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0044] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0050] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0055] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0060] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0068] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0073] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0080] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0085] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0091] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0095] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0105] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0106] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 11:06:19 compute-0 NetworkManager[49067]: <info>  [1770030379.0112] device (eth1): Activation: successful, device activated.
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.1916] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 sudo[52210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpmtuqijyrlrrxxkfhyyxlkjtpopwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030379.9871776-840-61695388053053/AnsiballZ_async_status.py'
Feb 02 11:06:20 compute-0 sudo[52210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.3418] checkpoint[0x55fc94a11950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.3421] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 python3.9[52212]: ansible-ansible.legacy.async_status Invoked with jid=j332673910823.51843 mode=status _async_dir=/root/.ansible_async
Feb 02 11:06:20 compute-0 sudo[52210]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.6371] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.6381] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.8289] audit: op="networking-control" arg="global-dns-configuration" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.8316] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.8339] audit: op="networking-control" arg="global-dns-configuration" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.8359] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.9679] checkpoint[0x55fc94a11a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 02 11:06:20 compute-0 NetworkManager[49067]: <info>  [1770030380.9684] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Feb 02 11:06:21 compute-0 ansible-async_wrapper.py[51847]: Module complete (51847)
Feb 02 11:06:21 compute-0 ansible-async_wrapper.py[51846]: Done in kid B.
Feb 02 11:06:23 compute-0 sudo[52316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tflandbdsvgnamnbluawkblacpakzzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030379.9871776-840-61695388053053/AnsiballZ_async_status.py'
Feb 02 11:06:23 compute-0 sudo[52316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:23 compute-0 python3.9[52318]: ansible-ansible.legacy.async_status Invoked with jid=j332673910823.51843 mode=status _async_dir=/root/.ansible_async
Feb 02 11:06:23 compute-0 sudo[52316]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:24 compute-0 sudo[52415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjimffskcsizliinpilfafpealbxmrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030379.9871776-840-61695388053053/AnsiballZ_async_status.py'
Feb 02 11:06:24 compute-0 sudo[52415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:24 compute-0 python3.9[52417]: ansible-ansible.legacy.async_status Invoked with jid=j332673910823.51843 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 11:06:24 compute-0 sudo[52415]: pam_unix(sudo:session): session closed for user root
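[editor's note] The sudo/async_status churn here is Ansible's async polling loop: the network task ran under async_wrapper, "Module complete (51847)" marks the wrapped module finishing, the controller polls async_status with mode=status until the job reports finished, then issues mode=cleanup. Roughly what those two modes touch on disk, assuming the status file is named after the logged job id:

    cat /root/.ansible_async/j332673910823.51843     # mode=status reads this JSON
    rm -f /root/.ansible_async/j332673910823.51843   # mode=cleanup removes it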
Feb 02 11:06:24 compute-0 sudo[52568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuwmbdaetmkgbzezddeadsddrcwvyhoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030384.5989401-921-132512167775588/AnsiballZ_stat.py'
Feb 02 11:06:24 compute-0 sudo[52568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:24 compute-0 python3.9[52570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:06:25 compute-0 sudo[52568]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:25 compute-0 sudo[52691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jacqkgckdptzfsnxblquhlysupsnirtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030384.5989401-921-132512167775588/AnsiballZ_copy.py'
Feb 02 11:06:25 compute-0 sudo[52691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:25 compute-0 python3.9[52693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030384.5989401-921-132512167775588/.source.returncode _original_basename=.ucb4n0bh follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:25 compute-0 sudo[52691]: pam_unix(sudo:session): session closed for user root
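[editor's note] The os-net-config.returncode file just written is a one-byte marker: the logged checksum b6589fc6ab0dc82cf12099d1c2d40ab994e8410c is the SHA-1 of the single character "0", i.e. the network configuration run exited successfully. Quick check:

    printf '0' | sha1sum    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c  -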
Feb 02 11:06:26 compute-0 sudo[52843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgytipyurkmnzhnuyfkajaixekrcobnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030385.7990792-969-172432890851521/AnsiballZ_stat.py'
Feb 02 11:06:26 compute-0 sudo[52843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:26 compute-0 python3.9[52845]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:06:26 compute-0 sudo[52843]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:26 compute-0 sudo[52966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghclrqpntabjffeuwpdugcjqavjfkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030385.7990792-969-172432890851521/AnsiballZ_copy.py'
Feb 02 11:06:26 compute-0 sudo[52966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:26 compute-0 python3.9[52968]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030385.7990792-969-172432890851521/.source.cfg _original_basename=.t1qe6r8a follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:26 compute-0 sudo[52966]: pam_unix(sudo:session): session closed for user root
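[editor's note] Writing 99-edpm-disable-network-config.cfg stops cloud-init from rendering its own network configuration on later boots, which would otherwise fight the NetworkManager/OVS state built above. The rendered content is not logged; the standard cloud-init switch has this shape (an assumption, not the actual template output):

    cat /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg
    # expected shape:
    #   network:
    #     config: disabled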
Feb 02 11:06:27 compute-0 sudo[53119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uknzapgrxfqtylttfddxdrmfjstdlmqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030387.1801004-1014-272103831857729/AnsiballZ_systemd.py'
Feb 02 11:06:27 compute-0 sudo[53119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:27 compute-0 python3.9[53121]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:06:27 compute-0 systemd[1]: Reloading Network Manager...
Feb 02 11:06:27 compute-0 NetworkManager[49067]: <info>  [1770030387.7932] audit: op="reload" arg="0" pid=53125 uid=0 result="success"
Feb 02 11:06:27 compute-0 NetworkManager[49067]: <info>  [1770030387.7944] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 02 11:06:27 compute-0 systemd[1]: Reloaded Network Manager.
Feb 02 11:06:27 compute-0 sudo[53119]: pam_unix(sudo:session): session closed for user root
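[editor's note] The systemd task reloads rather than restarts NetworkManager, so no device is touched: systemd delivers SIGHUP (the op="reload" arg="0" audit entry) and the daemon re-reads NetworkManager.conf plus the conf.d drop-ins listed in the config signal line. By hand:

    systemctl reload NetworkManager    # SIGHUP; re-reads config without disturbing device state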
Feb 02 11:06:28 compute-0 sshd-session[45066]: Connection closed by 192.168.122.30 port 54220
Feb 02 11:06:28 compute-0 sshd-session[45063]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:06:28 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Feb 02 11:06:28 compute-0 systemd[1]: session-10.scope: Consumed 48.159s CPU time.
Feb 02 11:06:28 compute-0 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Feb 02 11:06:28 compute-0 systemd-logind[793]: Removed session 10.
Feb 02 11:06:29 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 11:06:33 compute-0 sshd-session[53158]: Accepted publickey for zuul from 192.168.122.30 port 40892 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:06:33 compute-0 systemd-logind[793]: New session 11 of user zuul.
Feb 02 11:06:33 compute-0 systemd[1]: Started Session 11 of User zuul.
Feb 02 11:06:33 compute-0 sshd-session[53158]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:06:34 compute-0 python3.9[53312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:06:35 compute-0 python3.9[53466]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:06:36 compute-0 python3.9[53659]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:06:36 compute-0 sshd-session[53161]: Connection closed by 192.168.122.30 port 40892
Feb 02 11:06:36 compute-0 sshd-session[53158]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:06:36 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Feb 02 11:06:36 compute-0 systemd[1]: session-11.scope: Consumed 1.872s CPU time.
Feb 02 11:06:36 compute-0 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Feb 02 11:06:36 compute-0 systemd-logind[793]: Removed session 11.
Feb 02 11:06:37 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 11:06:42 compute-0 sshd-session[53688]: Accepted publickey for zuul from 192.168.122.30 port 40182 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:06:42 compute-0 systemd-logind[793]: New session 12 of user zuul.
Feb 02 11:06:42 compute-0 systemd[1]: Started Session 12 of User zuul.
Feb 02 11:06:42 compute-0 sshd-session[53688]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:06:43 compute-0 python3.9[53842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:06:44 compute-0 python3.9[53996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:06:45 compute-0 sudo[54150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebmmyokwigrhxyaidznwenfhxjukvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030404.9264233-75-69838057655770/AnsiballZ_setup.py'
Feb 02 11:06:45 compute-0 sudo[54150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:45 compute-0 python3.9[54152]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:06:45 compute-0 sudo[54150]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:45 compute-0 sudo[54234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxrpyjycknmkpogqdvgnyuopjllnyjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030404.9264233-75-69838057655770/AnsiballZ_dnf.py'
Feb 02 11:06:45 compute-0 sudo[54234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:46 compute-0 python3.9[54236]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:06:47 compute-0 sudo[54234]: pam_unix(sudo:session): session closed for user root
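[editor's note] The ansible.legacy.dnf invocations in this stretch (podman above, openssh-server and chrony below) are plain installs with default options; each reduces to:

    dnf -y install podman    # state=present; no-op if the package is already installed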
Feb 02 11:06:47 compute-0 sudo[54388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncmpguknsgadcrcmzjuasjrjetptbacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030407.76975-111-139296885630558/AnsiballZ_setup.py'
Feb 02 11:06:47 compute-0 sudo[54388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:48 compute-0 python3.9[54390]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:06:48 compute-0 sudo[54388]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:49 compute-0 sudo[54583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adasoquarvudwnyrqhxxadazbbvcukkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030408.744732-144-231876033150237/AnsiballZ_file.py'
Feb 02 11:06:49 compute-0 sudo[54583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:49 compute-0 python3.9[54585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:49 compute-0 sudo[54583]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:49 compute-0 sudo[54736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzcffjbmibwkqfochsuwwmeozyyyfly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030409.474782-168-118716094990508/AnsiballZ_command.py'
Feb 02 11:06:49 compute-0 sudo[54736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:50 compute-0 python3.9[54738]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2284199177-merged.mount: Deactivated successfully.
Feb 02 11:06:50 compute-0 podman[54739]: 2026-02-02 11:06:50.074595369 +0000 UTC m=+0.039660189 system refresh
Feb 02 11:06:50 compute-0 sudo[54736]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:50 compute-0 sudo[54900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddfvswxjjhpdyrgzydqomrgnqaocmyer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030410.2658162-192-164595168450618/AnsiballZ_stat.py'
Feb 02 11:06:50 compute-0 sudo[54900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:50 compute-0 python3.9[54902]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:06:50 compute-0 sudo[54900]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:06:51 compute-0 sudo[55023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caarbhbhvvthxpnnzmkqelthgayzzqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030410.2658162-192-164595168450618/AnsiballZ_copy.py'
Feb 02 11:06:51 compute-0 sudo[55023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:51 compute-0 python3.9[55025]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030410.2658162-192-164595168450618/.source.json follow=False _original_basename=podman_network_config.j2 checksum=95bc256b35332db291048ec228995b612fed36bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:06:51 compute-0 sudo[55023]: pam_unix(sudo:session): session closed for user root
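[editor's note] "podman network inspect podman" materializes the default network on first use (hence the "system refresh" event), after which the play overwrites /etc/containers/networks/podman.json with its rendering of podman_network_config.j2. The template output is not logged; a netavark-format definition along these lines is what normally lives at that path (all values below are assumptions apart from the name and path):

    cat > /etc/containers/networks/podman.json <<'EOF'
    {
        "name": "podman",
        "driver": "bridge",
        "network_interface": "podman0",
        "subnets": [{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}],
        "ipv6_enabled": false,
        "internal": false,
        "dns_enabled": false
    }
    EOF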
Feb 02 11:06:51 compute-0 sudo[55175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwunrdsxlnpeghutwfjdkhgwitccednv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030411.691328-237-46461707783949/AnsiballZ_stat.py'
Feb 02 11:06:51 compute-0 sudo[55175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:52 compute-0 python3.9[55177]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:06:52 compute-0 sudo[55175]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:52 compute-0 sudo[55298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfcdafhqyrjvjiexucdvwmbnlwvjfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030411.691328-237-46461707783949/AnsiballZ_copy.py'
Feb 02 11:06:52 compute-0 sudo[55298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:52 compute-0 python3.9[55300]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770030411.691328-237-46461707783949/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e7a44be183e9fa7659a301c3109d23944befc809 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:06:52 compute-0 sudo[55298]: pam_unix(sudo:session): session closed for user root
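[editor's note] 20-edpm-podman-registries.conf is a containers-registries.conf(5) drop-in; its rendered content is likewise not in the log, but such drop-ins typically pin the unqualified search list, e.g. (assumption, not the actual registries.conf.j2 output):

    cat /etc/containers/registries.conf.d/20-edpm-podman-registries.conf
    # plausible shape:
    #   unqualified-search-registries = ["quay.io", "registry.redhat.io"]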
Feb 02 11:06:53 compute-0 sudo[55450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftzgjdwlfjuyksejakvdpvmkmrxyvjvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030412.834014-285-228672505465123/AnsiballZ_ini_file.py'
Feb 02 11:06:53 compute-0 sudo[55450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:53 compute-0 python3.9[55452]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:06:53 compute-0 sudo[55450]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:53 compute-0 sudo[55602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztedyukcuaimipzuxiqtfydlpzebzzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030413.5150821-285-192126727423952/AnsiballZ_ini_file.py'
Feb 02 11:06:53 compute-0 sudo[55602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:53 compute-0 python3.9[55604]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:06:53 compute-0 sudo[55602]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:54 compute-0 sudo[55754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ramqyamecsmehucfarbqttwuntpwhyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030414.0374706-285-189211449688423/AnsiballZ_ini_file.py'
Feb 02 11:06:54 compute-0 sudo[55754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:54 compute-0 python3.9[55756]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:06:54 compute-0 sudo[55754]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:54 compute-0 sudo[55906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynmercrfwblgqyqucisctixfnregxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030414.554343-285-7812684234876/AnsiballZ_ini_file.py'
Feb 02 11:06:54 compute-0 sudo[55906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:54 compute-0 python3.9[55908]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:06:54 compute-0 sudo[55906]: pam_unix(sudo:session): session closed for user root
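[editor's note] The four ini_file tasks above can be read straight off into the resulting /etc/containers/containers.conf; unlike the templated files, this one is fully determined by the logged parameters:

    cat /etc/containers/containers.conf
    #   [containers]
    #   pids_limit = 4096
    #
    #   [engine]
    #   events_logger = "journald"
    #   runtime = "crun"
    #
    #   [network]
    #   network_backend = "netavark"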
Feb 02 11:06:55 compute-0 sudo[56058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqneizojbbnpmkbwdzebfohtlqdzrhzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030415.3742483-378-233621176439113/AnsiballZ_dnf.py'
Feb 02 11:06:55 compute-0 sudo[56058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:55 compute-0 python3.9[56060]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:06:57 compute-0 sudo[56058]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:57 compute-0 sudo[56211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfsvjxaxddecwszuxjifpqwvlwjyrwfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030417.7362268-411-280674523391998/AnsiballZ_setup.py'
Feb 02 11:06:57 compute-0 sudo[56211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:58 compute-0 python3.9[56213]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:06:58 compute-0 sudo[56211]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:58 compute-0 sudo[56365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sosmiqsdruckzptoqmkpuqumnantszgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030418.4204073-435-34955533978147/AnsiballZ_stat.py'
Feb 02 11:06:58 compute-0 sudo[56365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:58 compute-0 python3.9[56367]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:06:58 compute-0 sudo[56365]: pam_unix(sudo:session): session closed for user root
Feb 02 11:06:59 compute-0 sudo[56517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upkivpkssdjqojzeehttuqcfnpotmtte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030419.0268805-462-240171507083753/AnsiballZ_stat.py'
Feb 02 11:06:59 compute-0 sudo[56517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:06:59 compute-0 python3.9[56519]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:06:59 compute-0 sudo[56517]: pam_unix(sudo:session): session closed for user root
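The two stat probes above, /run/ostree-booted and /sbin/transactional-update, are the usual way roles detect image-based hosts: rpm-ostree systems create the former at boot and transactional-update systems ship the latter, and package tasks are typically skipped or rerouted when either exists. A sketch of the probe-and-guard pattern; the register variable and the when wiring are assumptions, since the log only shows the module calls:

    - name: Check whether the host is booted from an ostree image
      ansible.builtin.stat:
        path: /run/ostree-booted
      register: ostree_booted          # variable name is an assumption

    - name: Install packages only on traditional rpm hosts
      ansible.builtin.dnf:
        name: chrony
        state: present
      when: not ostree_booted.stat.exists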
Feb 02 11:07:00 compute-0 sudo[56669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrldhmjletjeqqbpxwaazqdxfdufafth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030419.7898042-492-252222579958527/AnsiballZ_command.py'
Feb 02 11:07:00 compute-0 sudo[56669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:00 compute-0 python3.9[56671]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:07:00 compute-0 sudo[56669]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:00 compute-0 sudo[56822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxkqpfeakxhztfcarakzitpkwkqrine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030420.5391188-522-278570603594512/AnsiballZ_service_facts.py'
Feb 02 11:07:00 compute-0 sudo[56822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:01 compute-0 python3.9[56824]: ansible-service_facts Invoked
Feb 02 11:07:01 compute-0 network[56841]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:07:01 compute-0 network[56842]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:07:01 compute-0 network[56843]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:07:03 compute-0 sudo[56822]: pam_unix(sudo:session): session closed for user root
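ansible-service_facts enumerates systemd units as well as legacy SysV initscripts; scanning the latter appears to be what makes the deprecated network-scripts service print its warning banner into the journal at 11:07:01. The module takes no arguments:

    - name: Gather service state for later conditionals
      ansible.builtin.service_facts:

Results land in ansible_facts.services, e.g. ansible_facts.services['chronyd.service'].state.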
Feb 02 11:07:04 compute-0 sudo[57126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrwqxtjsnvgvbwzkbfsdaceaqfdyfxe ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1770030423.919917-567-31593169557084/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1770030423.919917-567-31593169557084/args'
Feb 02 11:07:04 compute-0 sudo[57126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:04 compute-0 sudo[57126]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:04 compute-0 sudo[57293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxbbbucudmgoymrxxparrfdlutvhuwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030424.535271-600-168388266970466/AnsiballZ_dnf.py'
Feb 02 11:07:04 compute-0 sudo[57293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:04 compute-0 python3.9[57295]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:07:06 compute-0 sudo[57293]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:07 compute-0 sudo[57446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytpddetmfmumycsdowrauibyezetzvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030426.84028-639-136386486385612/AnsiballZ_package_facts.py'
Feb 02 11:07:07 compute-0 sudo[57446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:07 compute-0 python3.9[57448]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 02 11:07:07 compute-0 sudo[57446]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:08 compute-0 sudo[57598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjlgoybejfzozhjqwdsizeeefflptpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030428.677914-669-101482729848313/AnsiballZ_stat.py'
Feb 02 11:07:08 compute-0 sudo[57598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:09 compute-0 python3.9[57600]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:09 compute-0 sudo[57598]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:09 compute-0 sudo[57723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxefmidcpjgvkukrccffhdywxocxbjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030428.677914-669-101482729848313/AnsiballZ_copy.py'
Feb 02 11:07:09 compute-0 sudo[57723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:09 compute-0 python3.9[57725]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030428.677914-669-101482729848313/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:09 compute-0 sudo[57723]: pam_unix(sudo:session): session closed for user root
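Each managed file shows up in this journal as a stat/copy pair: the action plugin first stats the destination, compares the recorded sha1 checksum, and only ships the file when they differ. The _original_basename=chrony.conf.j2 on the copy call indicates this was likely a template task, rendered locally and delivered through the copy module; a reconstruction (template contents are not logged):

    - name: Deploy chrony.conf rendered from a template
      ansible.builtin.template:
        src: chrony.conf.j2
        dest: /etc/chrony.conf
        mode: '0644'
        backup: true

backup: true keeps a timestamped copy of the previous file next to the original, matching the backup=True in the logged arguments.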
Feb 02 11:07:10 compute-0 sudo[57877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dffzjqfxzrargrzmrlyvydnesfgboacy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030429.9551826-714-165512204689694/AnsiballZ_stat.py'
Feb 02 11:07:10 compute-0 sudo[57877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:10 compute-0 python3.9[57879]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:10 compute-0 sudo[57877]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:10 compute-0 sudo[58002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcvkhgmtouwknugnjuerdrzgwnwgrop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030429.9551826-714-165512204689694/AnsiballZ_copy.py'
Feb 02 11:07:10 compute-0 sudo[58002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:10 compute-0 python3.9[58004]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030429.9551826-714-165512204689694/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:10 compute-0 sudo[58002]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:12 compute-0 sudo[58156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfndgqmsnlwwqydinxqpdbnbcshxlei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030431.9993634-777-279204766076229/AnsiballZ_lineinfile.py'
Feb 02 11:07:12 compute-0 sudo[58156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:12 compute-0 python3.9[58158]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:12 compute-0 sudo[58156]: pam_unix(sudo:session): session closed for user root
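PEERNTP=no in /etc/sysconfig/network stops the DHCP client from handing DHCP-supplied NTP servers to chronyd, so only the servers listed in chrony.conf are used. The logged lineinfile call corresponds to (task name hypothetical):

    - name: Disable NTP servers learned from DHCP
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        backup: true
        mode: '0644'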
Feb 02 11:07:13 compute-0 sudo[58310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvotuekinbniyyufmiynqqyisxgfhcdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030433.715155-822-280241928905077/AnsiballZ_setup.py'
Feb 02 11:07:13 compute-0 sudo[58310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:14 compute-0 python3.9[58312]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:07:14 compute-0 sudo[58310]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:14 compute-0 sudo[58394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bddscsrnodoycrfzhhmwllzpyxilfsuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030433.715155-822-280241928905077/AnsiballZ_systemd.py'
Feb 02 11:07:14 compute-0 sudo[58394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:15 compute-0 python3.9[58396]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:15 compute-0 sudo[58394]: pam_unix(sudo:session): session closed for user root
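Enabling and starting chronyd is a single systemd call: enabled manages the unit's install state, while state: started only acts if the unit is not already running. Reconstructed:

    - name: Enable and start chronyd
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started

The separate state=restarted call that follows at 11:07:17 is the handler-style restart that picks up the new chrony.conf, which is why chronyd logs a fresh start (version 4.8, drift file reload) immediately afterwards.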
Feb 02 11:07:16 compute-0 sudo[58548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przwshwitmcfdpyxfewsnpecqqwffdap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030436.0436876-870-161486361003660/AnsiballZ_setup.py'
Feb 02 11:07:16 compute-0 sudo[58548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:16 compute-0 python3.9[58550]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:07:16 compute-0 sudo[58548]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:17 compute-0 sudo[58632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osdenokqnsjspqnpijjzqbemeyuhvpky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030436.0436876-870-161486361003660/AnsiballZ_systemd.py'
Feb 02 11:07:17 compute-0 sudo[58632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:17 compute-0 python3.9[58634]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:07:17 compute-0 chronyd[795]: chronyd exiting
Feb 02 11:07:17 compute-0 systemd[1]: Stopping NTP client/server...
Feb 02 11:07:17 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Feb 02 11:07:17 compute-0 systemd[1]: Stopped NTP client/server.
Feb 02 11:07:17 compute-0 systemd[1]: Starting NTP client/server...
Feb 02 11:07:17 compute-0 chronyd[58642]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 02 11:07:17 compute-0 chronyd[58642]: Frequency -28.557 +/- 0.312 ppm read from /var/lib/chrony/drift
Feb 02 11:07:17 compute-0 chronyd[58642]: Loaded seccomp filter (level 2)
Feb 02 11:07:17 compute-0 systemd[1]: Started NTP client/server.
Feb 02 11:07:17 compute-0 sudo[58632]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:17 compute-0 sshd-session[53691]: Connection closed by 192.168.122.30 port 40182
Feb 02 11:07:17 compute-0 sshd-session[53688]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:07:17 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Feb 02 11:07:17 compute-0 systemd[1]: session-12.scope: Consumed 21.387s CPU time.
Feb 02 11:07:17 compute-0 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Feb 02 11:07:17 compute-0 systemd-logind[793]: Removed session 12.
Feb 02 11:07:23 compute-0 sshd-session[58668]: Accepted publickey for zuul from 192.168.122.30 port 45038 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:07:23 compute-0 systemd-logind[793]: New session 13 of user zuul.
Feb 02 11:07:23 compute-0 systemd[1]: Started Session 13 of User zuul.
Feb 02 11:07:23 compute-0 sshd-session[58668]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:07:24 compute-0 sudo[58821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucymiegqghxnbntjzoekyxlyqhfksilw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030444.0717716-21-274659142687981/AnsiballZ_file.py'
Feb 02 11:07:24 compute-0 sudo[58821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:24 compute-0 python3.9[58823]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:24 compute-0 sudo[58821]: pam_unix(sudo:session): session closed for user root
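/var/lib/edpm-config/firewall is the drop directory that the edpm_nftables_from_files module reads later in this log (11:08:05); roles create it idempotently before writing their rule snippets into it:

    - name: Ensure the firewall snippet directory exists
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: '0750'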
Feb 02 11:07:25 compute-0 sudo[58973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nueolcrzfatojgjghvlbnnpknpfirihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030444.8685381-57-61761044969949/AnsiballZ_stat.py'
Feb 02 11:07:25 compute-0 sudo[58973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:25 compute-0 python3.9[58975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:25 compute-0 sudo[58973]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:25 compute-0 sudo[59096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caugwzlebmwhyadsywzizdzpnecmneqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030444.8685381-57-61761044969949/AnsiballZ_copy.py'
Feb 02 11:07:25 compute-0 sudo[59096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:26 compute-0 python3.9[59098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030444.8685381-57-61761044969949/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:26 compute-0 sudo[59096]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:26 compute-0 sshd-session[58671]: Connection closed by 192.168.122.30 port 45038
Feb 02 11:07:26 compute-0 sshd-session[58668]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:07:26 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Feb 02 11:07:26 compute-0 systemd[1]: session-13.scope: Consumed 1.275s CPU time.
Feb 02 11:07:26 compute-0 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Feb 02 11:07:26 compute-0 systemd-logind[793]: Removed session 13.
Feb 02 11:07:31 compute-0 sshd-session[59123]: Accepted publickey for zuul from 192.168.122.30 port 50852 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:07:31 compute-0 systemd-logind[793]: New session 14 of user zuul.
Feb 02 11:07:31 compute-0 systemd[1]: Started Session 14 of User zuul.
Feb 02 11:07:31 compute-0 sshd-session[59123]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:07:32 compute-0 python3.9[59276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:07:33 compute-0 sudo[59430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryrdsinaaunraciqykdbejqtnsgfkhjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030453.082159-54-97539345623686/AnsiballZ_file.py'
Feb 02 11:07:33 compute-0 sudo[59430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:33 compute-0 python3.9[59432]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:33 compute-0 sudo[59430]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:34 compute-0 sudo[59605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsxeqzqeflzwquknirnflrckpjkskoyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030453.746048-78-118236111802587/AnsiballZ_stat.py'
Feb 02 11:07:34 compute-0 sudo[59605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:34 compute-0 python3.9[59607]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:34 compute-0 sudo[59605]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:34 compute-0 sudo[59728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhgeaojlgkkrejhnzaehmeqxkdpkagi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030453.746048-78-118236111802587/AnsiballZ_copy.py'
Feb 02 11:07:34 compute-0 sudo[59728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:34 compute-0 python3.9[59730]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1770030453.746048-78-118236111802587/.source.json _original_basename=.jabjh58n follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:34 compute-0 sudo[59728]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:35 compute-0 sudo[59880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwxsmeaesjrkxwkvqozcbakimpmqfnmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030455.3869522-147-32243060514505/AnsiballZ_stat.py'
Feb 02 11:07:35 compute-0 sudo[59880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:35 compute-0 python3.9[59882]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:35 compute-0 sudo[59880]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:36 compute-0 sudo[60003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaciicyzdyqabsdshhjxqmvbqhapjwlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030455.3869522-147-32243060514505/AnsiballZ_copy.py'
Feb 02 11:07:36 compute-0 sudo[60003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:36 compute-0 python3.9[60005]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030455.3869522-147-32243060514505/.source _original_basename=.so3ed5jo follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:36 compute-0 sudo[60003]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:36 compute-0 sudo[60155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvmdylleykuinckvegoatehteeoypne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030456.4621353-195-143026083344955/AnsiballZ_file.py'
Feb 02 11:07:36 compute-0 sudo[60155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:36 compute-0 python3.9[60157]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:07:36 compute-0 sudo[60155]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:37 compute-0 sudo[60307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjvorktqwjtssvuwdhaqiofnbmgbcdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030457.0405478-219-77855452359885/AnsiballZ_stat.py'
Feb 02 11:07:37 compute-0 sudo[60307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:37 compute-0 python3.9[60309]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:37 compute-0 sudo[60307]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:37 compute-0 sudo[60430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnlaliwbrfkchlraqqabfuseonojklip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030457.0405478-219-77855452359885/AnsiballZ_copy.py'
Feb 02 11:07:37 compute-0 sudo[60430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:37 compute-0 python3.9[60432]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770030457.0405478-219-77855452359885/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:07:37 compute-0 sudo[60430]: pam_unix(sudo:session): session closed for user root
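The setype=container_file_t argument labels the shutdown helper so containerized processes may access it; copy and file apply the SELinux type directly, with no separate sefcontext/restorecon step. Reconstructed from the logged arguments (task name hypothetical):

    - name: Install the edpm-container-shutdown helper
      ansible.builtin.copy:
        src: edpm-container-shutdown
        dest: /var/local/libexec/edpm-container-shutdown
        owner: root
        group: root
        mode: '0700'
        setype: container_file_t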
Feb 02 11:07:38 compute-0 sudo[60582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beyoudqcqotbvivqfkllzbqxikubhrbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030458.0097964-219-148396045299570/AnsiballZ_stat.py'
Feb 02 11:07:38 compute-0 sudo[60582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:38 compute-0 python3.9[60584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:38 compute-0 sudo[60582]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:38 compute-0 sudo[60705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccikvxjizewuycyxcfwwbivdrtlfywxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030458.0097964-219-148396045299570/AnsiballZ_copy.py'
Feb 02 11:07:38 compute-0 sudo[60705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:38 compute-0 python3.9[60707]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770030458.0097964-219-148396045299570/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:07:38 compute-0 sudo[60705]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:39 compute-0 sudo[60857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkyptqfszjvsrusvdezgorubyysmory ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030459.0610225-306-190853401137978/AnsiballZ_file.py'
Feb 02 11:07:39 compute-0 sudo[60857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:39 compute-0 python3.9[60859]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:39 compute-0 sudo[60857]: pam_unix(sudo:session): session closed for user root
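Note the mode=420 in the entry above: when a playbook writes mode: 0644 without quotes, YAML parses it as the octal integer 0644, which Ansible then logs in decimal as 420. The result is still rw-r--r--, but quoting the mode avoids the ambiguity (and the classic bug where an unquoted 644 is taken as decimal). The safe spelling of the same task:

    - name: Ensure the preset directory exists
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: '0644'   # the log shows 420, i.e. this same value parsed as unquoted octal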
Feb 02 11:07:39 compute-0 sudo[61009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uumlauqrmblyhfllvhmuglbrjjowvacw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030459.6449108-330-113386215021068/AnsiballZ_stat.py'
Feb 02 11:07:39 compute-0 sudo[61009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:40 compute-0 python3.9[61011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:40 compute-0 sudo[61009]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:40 compute-0 sudo[61132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wltgpbypredmmgmfudluzppdsenharcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030459.6449108-330-113386215021068/AnsiballZ_copy.py'
Feb 02 11:07:40 compute-0 sudo[61132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:40 compute-0 python3.9[61134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030459.6449108-330-113386215021068/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:40 compute-0 sudo[61132]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:40 compute-0 sudo[61284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lslfwmbbmnrqkwppdjeierjxvjgaquva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030460.672135-375-144654765859964/AnsiballZ_stat.py'
Feb 02 11:07:40 compute-0 sudo[61284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:41 compute-0 python3.9[61286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:41 compute-0 sudo[61284]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:41 compute-0 sudo[61407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crgriltnytabdstihftaftakhkrjjpix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030460.672135-375-144654765859964/AnsiballZ_copy.py'
Feb 02 11:07:41 compute-0 sudo[61407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:41 compute-0 python3.9[61409]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030460.672135-375-144654765859964/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:41 compute-0 sudo[61407]: pam_unix(sudo:session): session closed for user root
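systemd presets let systemctl preset decide the enable state of newly installed units. The 91-edpm-container-shutdown.preset written above presumably contains a single enable line; its actual content is hidden behind NOT_LOGGING_PARAMETER, so the content below is an assumption:

    - name: Install a preset that enables the shutdown unit
      ansible.builtin.copy:
        dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
        content: "enable edpm-container-shutdown.service\n"   # assumed content
        owner: root
        group: root
        mode: '0644'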
Feb 02 11:07:42 compute-0 sudo[61559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfcrjjgfbhwptfdaukhmbflbhlvqbzef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030461.6922188-420-3848758883612/AnsiballZ_systemd.py'
Feb 02 11:07:42 compute-0 sudo[61559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:42 compute-0 python3.9[61561]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:42 compute-0 systemd[1]: Reloading.
Feb 02 11:07:42 compute-0 systemd-rc-local-generator[61586]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:42 compute-0 systemd-sysv-generator[61589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:42 compute-0 systemd[1]: Reloading.
Feb 02 11:07:42 compute-0 systemd-rc-local-generator[61625]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:42 compute-0 systemd-sysv-generator[61629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:42 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Feb 02 11:07:42 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Feb 02 11:07:42 compute-0 sudo[61559]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:43 compute-0 sudo[61786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhpxbpgjkimfmhqodhbujhorowczydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030463.1404974-444-4257180631870/AnsiballZ_stat.py'
Feb 02 11:07:43 compute-0 sudo[61786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:43 compute-0 python3.9[61788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:43 compute-0 sudo[61786]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:43 compute-0 sudo[61909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edegjcpascjuffwnccczcbjkriclhziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030463.1404974-444-4257180631870/AnsiballZ_copy.py'
Feb 02 11:07:43 compute-0 sudo[61909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:44 compute-0 python3.9[61911]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030463.1404974-444-4257180631870/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:44 compute-0 sudo[61909]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:44 compute-0 sudo[62061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepcpzrjgbxgtvdjjldndzpsdvhilryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030464.1924713-489-96544855278672/AnsiballZ_stat.py'
Feb 02 11:07:44 compute-0 sudo[62061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:44 compute-0 python3.9[62063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:44 compute-0 sudo[62061]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:44 compute-0 sudo[62184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqopoifosynbkdzgtqorxiihjhyfapnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030464.1924713-489-96544855278672/AnsiballZ_copy.py'
Feb 02 11:07:44 compute-0 sudo[62184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:45 compute-0 python3.9[62186]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030464.1924713-489-96544855278672/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:45 compute-0 sudo[62184]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:45 compute-0 sudo[62336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klfeuzdndudfwqcpmlojeotqhchlrajt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030465.2288532-534-58091912336436/AnsiballZ_systemd.py'
Feb 02 11:07:45 compute-0 sudo[62336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:45 compute-0 python3.9[62338]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:45 compute-0 systemd[1]: Reloading.
Feb 02 11:07:45 compute-0 systemd-rc-local-generator[62361]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:45 compute-0 systemd-sysv-generator[62367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:45 compute-0 systemd[1]: Reloading.
Feb 02 11:07:46 compute-0 systemd-rc-local-generator[62399]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:46 compute-0 systemd-sysv-generator[62404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:46 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 11:07:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 11:07:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 11:07:46 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 11:07:46 compute-0 sudo[62336]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:46 compute-0 python3.9[62562]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:07:46 compute-0 network[62579]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:07:46 compute-0 network[62580]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:07:46 compute-0 network[62581]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:07:49 compute-0 sudo[62841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvwwhgedjkmmokorvspgvtakepvqycd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030469.0373745-582-144974791878608/AnsiballZ_systemd.py'
Feb 02 11:07:49 compute-0 sudo[62841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:49 compute-0 python3.9[62843]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:49 compute-0 systemd[1]: Reloading.
Feb 02 11:07:49 compute-0 systemd-rc-local-generator[62869]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:49 compute-0 systemd-sysv-generator[62876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:49 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 02 11:07:50 compute-0 iptables.init[62883]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 02 11:07:50 compute-0 iptables.init[62883]: iptables: Flushing firewall rules: [  OK  ]
Feb 02 11:07:50 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Feb 02 11:07:50 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 02 11:07:50 compute-0 sudo[62841]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:50 compute-0 sudo[63077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdmtatzhclyswjunrenpnmsrbamfkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030470.2347648-582-280901360037198/AnsiballZ_systemd.py'
Feb 02 11:07:50 compute-0 sudo[63077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:50 compute-0 python3.9[63079]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:50 compute-0 sudo[63077]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:51 compute-0 sudo[63231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwbuzzqgafwqjhjvikcgvwvqbbnrieec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030471.2383537-630-136917426128853/AnsiballZ_systemd.py'
Feb 02 11:07:51 compute-0 sudo[63231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:51 compute-0 python3.9[63233]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:07:51 compute-0 systemd[1]: Reloading.
Feb 02 11:07:51 compute-0 systemd-rc-local-generator[63259]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:07:51 compute-0 systemd-sysv-generator[63263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:07:52 compute-0 systemd[1]: Starting Netfilter Tables...
Feb 02 11:07:52 compute-0 systemd[1]: Finished Netfilter Tables.
Feb 02 11:07:52 compute-0 sudo[63231]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:52 compute-0 sudo[63422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzzaichjappicddddwkpjzwvgnsltul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030472.2014287-654-251919479501666/AnsiballZ_command.py'
Feb 02 11:07:52 compute-0 sudo[63422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:52 compute-0 python3.9[63424]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:07:52 compute-0 sudo[63422]: pam_unix(sudo:session): session closed for user root
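The entries from 11:07:49 to 11:07:52 are the iptables-to-nftables switch: stop and disable the legacy iptables and ip6tables services, enable nftables, then flush whatever ruleset was left behind. Reconstructed from the four logged calls, with the two service stops folded into a loop here for brevity:

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true

    - name: Start from a clean ruleset
      ansible.builtin.command: nft flush ruleset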
Feb 02 11:07:53 compute-0 sudo[63575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxynmagdbkezpaupbdidfdquwzdpuega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030473.5530853-696-50776311317384/AnsiballZ_stat.py'
Feb 02 11:07:53 compute-0 sudo[63575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:53 compute-0 python3.9[63577]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:54 compute-0 sudo[63575]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:54 compute-0 sudo[63700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabbyogtfepfremsjnxfvxfiofdqscei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030473.5530853-696-50776311317384/AnsiballZ_copy.py'
Feb 02 11:07:54 compute-0 sudo[63700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:54 compute-0 python3.9[63702]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030473.5530853-696-50776311317384/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:54 compute-0 sudo[63700]: pam_unix(sudo:session): session closed for user root
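The sshd_config deployment uses the validate hook: the new file is written to a temporary path, substituted for %s, and only moved into place if /usr/sbin/sshd -T -f %s exits 0, so a broken config can never cut off SSH access. The _original_basename=sshd_config_block.j2 again suggests a template task:

    - name: Deploy sshd_config, refusing files that fail validation
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: '0600'
        validate: /usr/sbin/sshd -T -f %s

The state=reloaded call that follows sends SIGHUP, which is exactly what sshd[1007] logs at 11:07:55 before rebinding ports.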
Feb 02 11:07:54 compute-0 sudo[63853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjywzrsdqohyfuarplwyuhmbxakuadfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030474.7087617-741-249687989627306/AnsiballZ_systemd.py'
Feb 02 11:07:54 compute-0 sudo[63853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:55 compute-0 python3.9[63855]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:07:55 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Feb 02 11:07:55 compute-0 sshd[1007]: Received SIGHUP; restarting.
Feb 02 11:07:55 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Feb 02 11:07:55 compute-0 sshd[1007]: Server listening on :: port 22.
Feb 02 11:07:55 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Feb 02 11:07:55 compute-0 sudo[63853]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:55 compute-0 sudo[64009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itlmluaqmttrvyxziudcqxndrtbxtclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030475.55489-765-78839154497632/AnsiballZ_file.py'
Feb 02 11:07:55 compute-0 sudo[64009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:55 compute-0 python3.9[64011]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:55 compute-0 sudo[64009]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:56 compute-0 sudo[64161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxpjqvbaaozbbnfyvuuakyfdjolclfwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030476.2177393-789-194604193261695/AnsiballZ_stat.py'
Feb 02 11:07:56 compute-0 sudo[64161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:56 compute-0 python3.9[64163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:56 compute-0 sudo[64161]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:56 compute-0 sudo[64284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwjgyezfgjtgzwnqvohyinmovfuhgpnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030476.2177393-789-194604193261695/AnsiballZ_copy.py'
Feb 02 11:07:56 compute-0 sudo[64284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:57 compute-0 python3.9[64286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030476.2177393-789-194604193261695/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:57 compute-0 sudo[64284]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:58 compute-0 sudo[64436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxymmhsolunenxtciujgbkclwbyeajdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030477.8386607-843-148105909674121/AnsiballZ_timezone.py'
Feb 02 11:07:58 compute-0 sudo[64436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:58 compute-0 python3.9[64438]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 11:07:58 compute-0 systemd[1]: Starting Time & Date Service...
Feb 02 11:07:58 compute-0 systemd[1]: Started Time & Date Service.
Feb 02 11:07:58 compute-0 sudo[64436]: pam_unix(sudo:session): session closed for user root
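community.general.timezone sets the system timezone through timedatectl, which is why systemd starts the D-Bus-activated Time & Date Service right after the call; hwclock=None leaves the RTC setting untouched:

    - name: Set the host timezone to UTC
      community.general.timezone:
        name: UTC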
Feb 02 11:07:59 compute-0 sudo[64592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abnqwbdxidqvbgmhmrsgwyxqlrvikgku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030478.8768668-870-155452702429928/AnsiballZ_file.py'
Feb 02 11:07:59 compute-0 sudo[64592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:59 compute-0 python3.9[64594]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:07:59 compute-0 sudo[64592]: pam_unix(sudo:session): session closed for user root
Feb 02 11:07:59 compute-0 sudo[64744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfhzznwusnobzgorncrglvvlnsowhdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030479.554469-894-75748582562053/AnsiballZ_stat.py'
Feb 02 11:07:59 compute-0 sudo[64744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:07:59 compute-0 python3.9[64746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:07:59 compute-0 sudo[64744]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:00 compute-0 sudo[64867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phfwhwdrjaxnebnunvdumfoqxqnhitnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030479.554469-894-75748582562053/AnsiballZ_copy.py'
Feb 02 11:08:00 compute-0 sudo[64867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:00 compute-0 python3.9[64869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030479.554469-894-75748582562053/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:00 compute-0 sudo[64867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:00 compute-0 sudo[65019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxxoxuzyeivpavhbincrxkhjnkevvpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030480.754552-939-170278280915158/AnsiballZ_stat.py'
Feb 02 11:08:00 compute-0 sudo[65019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:01 compute-0 python3.9[65021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:01 compute-0 sudo[65019]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:01 compute-0 sudo[65142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuhgjezdvseurscnuhqgafezpfgiqrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030480.754552-939-170278280915158/AnsiballZ_copy.py'
Feb 02 11:08:01 compute-0 sudo[65142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:01 compute-0 python3.9[65144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030480.754552-939-170278280915158/.source.yaml _original_basename=.mjnje3js follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:01 compute-0 sudo[65142]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:02 compute-0 sudo[65294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouncmfyvmmvwiznmpqzgaerqmlwhrsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030481.8755848-984-28742139410307/AnsiballZ_stat.py'
Feb 02 11:08:02 compute-0 sudo[65294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:02 compute-0 python3.9[65296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:02 compute-0 sudo[65294]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:02 compute-0 sudo[65417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcdkblzactpsjdnuqjndafrvovaizrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030481.8755848-984-28742139410307/AnsiballZ_copy.py'
Feb 02 11:08:02 compute-0 sudo[65417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:02 compute-0 python3.9[65419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030481.8755848-984-28742139410307/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:02 compute-0 sudo[65417]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:03 compute-0 sudo[65569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonbzcdxudjqcikqwmkzjgtillvqxmhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030483.0345898-1029-123425133835784/AnsiballZ_command.py'
Feb 02 11:08:03 compute-0 sudo[65569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:03 compute-0 python3.9[65571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:03 compute-0 sudo[65569]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:03 compute-0 sudo[65722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiplfvevdornwmdmplkimvrkyxjwhnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030483.6494312-1053-208910453366594/AnsiballZ_command.py'
Feb 02 11:08:03 compute-0 sudo[65722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:04 compute-0 python3.9[65724]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:04 compute-0 sudo[65722]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:04 compute-0 sudo[65875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjfcnrtfbqygmlwfamqhlebfjquscpfm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770030484.4819536-1077-126804731058398/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 11:08:04 compute-0 sudo[65875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:05 compute-0 python3[65877]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 11:08:05 compute-0 sudo[65875]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:05 compute-0 sudo[66027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozkerwpnzpoprahwzvmyqxccdbxerdye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030485.3201594-1101-87149471144250/AnsiballZ_stat.py'
Feb 02 11:08:05 compute-0 sudo[66027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:05 compute-0 python3.9[66029]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:05 compute-0 sudo[66027]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:06 compute-0 sudo[66150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyyopcwcpmtntuenltyrngpshixmzjcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030485.3201594-1101-87149471144250/AnsiballZ_copy.py'
Feb 02 11:08:06 compute-0 sudo[66150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:06 compute-0 python3.9[66152]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030485.3201594-1101-87149471144250/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:06 compute-0 sudo[66150]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:06 compute-0 sudo[66302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpdxjyyhrepwnkmugsfcjxemisffxyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030486.465687-1146-35649218052930/AnsiballZ_stat.py'
Feb 02 11:08:06 compute-0 sudo[66302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:06 compute-0 python3.9[66304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:06 compute-0 sudo[66302]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:07 compute-0 sudo[66425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzgeikmbcgubrqmpqwznsphvyjfhqmzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030486.465687-1146-35649218052930/AnsiballZ_copy.py'
Feb 02 11:08:07 compute-0 sudo[66425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:07 compute-0 python3.9[66427]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030486.465687-1146-35649218052930/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:07 compute-0 sudo[66425]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:08 compute-0 sudo[66577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdweelxnynnzvnjdmjnikkomsnrjzqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030487.7357924-1191-12546464382956/AnsiballZ_stat.py'
Feb 02 11:08:08 compute-0 sudo[66577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:08 compute-0 python3.9[66579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:08 compute-0 sudo[66577]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:08 compute-0 sudo[66700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zerlkapytbqkeyziekavlbgwowifibqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030487.7357924-1191-12546464382956/AnsiballZ_copy.py'
Feb 02 11:08:08 compute-0 sudo[66700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:09 compute-0 python3.9[66702]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030487.7357924-1191-12546464382956/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:09 compute-0 sudo[66700]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:09 compute-0 sudo[66852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pybqhaezilnzltltkusagsifhinfvxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030489.2979534-1236-184187767797195/AnsiballZ_stat.py'
Feb 02 11:08:09 compute-0 sudo[66852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:09 compute-0 python3.9[66854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:09 compute-0 sudo[66852]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:10 compute-0 sudo[66975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iueigrsodjogwsdhuacgjuiwilwyuqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030489.2979534-1236-184187767797195/AnsiballZ_copy.py'
Feb 02 11:08:10 compute-0 sudo[66975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:10 compute-0 python3.9[66977]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030489.2979534-1236-184187767797195/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:10 compute-0 sudo[66975]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:11 compute-0 sudo[67127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giiosbogtmniqffwabrztbtzpzwvnlcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030490.8478606-1281-129270829188980/AnsiballZ_stat.py'
Feb 02 11:08:11 compute-0 sudo[67127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:11 compute-0 python3.9[67129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:08:11 compute-0 sudo[67127]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:11 compute-0 sudo[67250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcmkmahuzhckmmgwffzqsiyuvonwuyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030490.8478606-1281-129270829188980/AnsiballZ_copy.py'
Feb 02 11:08:11 compute-0 sudo[67250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:11 compute-0 python3.9[67252]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770030490.8478606-1281-129270829188980/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:11 compute-0 sudo[67250]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:12 compute-0 sudo[67402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbzhqseztvdyvecsgglzaprxafnzylri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030492.096976-1326-141879872702933/AnsiballZ_file.py'
Feb 02 11:08:12 compute-0 sudo[67402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:12 compute-0 python3.9[67404]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:12 compute-0 sudo[67402]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:13 compute-0 sudo[67554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcdnvcnuubtnjrwdzvxutfnscvenvvaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030492.783199-1350-17682016206224/AnsiballZ_command.py'
Feb 02 11:08:13 compute-0 sudo[67554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:13 compute-0 python3.9[67556]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:13 compute-0 sudo[67554]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:13 compute-0 sudo[67713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfarpjkwhgndriadizbebzbsmmopvuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030493.449952-1374-17039675810545/AnsiballZ_blockinfile.py'
Feb 02 11:08:13 compute-0 sudo[67713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:14 compute-0 python3.9[67715]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:14 compute-0 sudo[67713]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:15 compute-0 sudo[67866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccnwflfnwwtuvwotowimxktqpbskgym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030494.7420373-1401-202283201784515/AnsiballZ_file.py'
Feb 02 11:08:15 compute-0 sudo[67866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:15 compute-0 python3.9[67868]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:15 compute-0 sudo[67866]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:15 compute-0 sudo[68018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmuauqkdtdpvmzbctzdwjorkchexazgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030495.3414137-1401-148657310826136/AnsiballZ_file.py'
Feb 02 11:08:15 compute-0 sudo[68018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:15 compute-0 python3.9[68020]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:15 compute-0 sudo[68018]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:16 compute-0 sudo[68170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrctpzorjmcowzshdaxsjjbriacjtiyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030496.0059824-1446-105424527798571/AnsiballZ_mount.py'
Feb 02 11:08:16 compute-0 sudo[68170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:16 compute-0 python3.9[68172]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 11:08:16 compute-0 sudo[68170]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:17 compute-0 sudo[68323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqddmnlmhwbpwuudsrkeebgthyhmzgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030496.8419518-1446-56490185090181/AnsiballZ_mount.py'
Feb 02 11:08:17 compute-0 sudo[68323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:17 compute-0 python3.9[68325]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 11:08:17 compute-0 sudo[68323]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:17 compute-0 sshd-session[59126]: Connection closed by 192.168.122.30 port 50852
Feb 02 11:08:17 compute-0 sshd-session[59123]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:08:17 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Feb 02 11:08:17 compute-0 systemd[1]: session-14.scope: Consumed 28.488s CPU time.
Feb 02 11:08:17 compute-0 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Feb 02 11:08:17 compute-0 systemd-logind[793]: Removed session 14.
Feb 02 11:08:23 compute-0 sshd-session[68352]: Accepted publickey for zuul from 192.168.122.30 port 47574 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:08:23 compute-0 systemd-logind[793]: New session 15 of user zuul.
Feb 02 11:08:23 compute-0 systemd[1]: Started Session 15 of User zuul.
Feb 02 11:08:23 compute-0 sshd-session[68352]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:08:23 compute-0 sudo[68505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vambgzabuhoemxywlkitaeklckshazqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030503.3353596-18-35829400876715/AnsiballZ_tempfile.py'
Feb 02 11:08:23 compute-0 sudo[68505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:23 compute-0 python3.9[68507]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 02 11:08:23 compute-0 sudo[68505]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:24 compute-0 sudo[68657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylbttozqxdtknotzottregvzbkllcdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030504.101621-54-116047632662853/AnsiballZ_stat.py'
Feb 02 11:08:24 compute-0 sudo[68657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:24 compute-0 python3.9[68659]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:08:24 compute-0 sudo[68657]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:25 compute-0 sudo[68809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbtrzltyouiuhblonyvnhlrzekadfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030504.9146657-84-239117958176269/AnsiballZ_setup.py'
Feb 02 11:08:25 compute-0 sudo[68809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:25 compute-0 python3.9[68811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:08:25 compute-0 sudo[68809]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:26 compute-0 sudo[68961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzshtlryxujwpywnwttehzbgshpigitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030505.9624043-109-219032779499347/AnsiballZ_blockinfile.py'
Feb 02 11:08:26 compute-0 sudo[68961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:26 compute-0 python3.9[68963]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk+5ipijEJfZ5WmD+HXTsUWj5YiWN8pWe35AO8ELR/LGUVm0CSFF6BVvCEFopx45Tw1wFTKqE8urw/0eWyAJVJxKtGQWkQkJI/2BRPiC4MsjGSd8kDRIhYcX7vGxghEmjhDOdybW3OquAwYpuBFzJ4YXEvem/0hkeOiLJtaWqivo4dDorFRJ5YlUnsmjnpaUVj10wq2ZJCHVNlBv99T5o5iJ36BE4CgrXRBltlXCrEGsC9R58R1VGtPS4RCuEqXsR8ufyuF6mSllD3AZVbZpOlOqfe2tffpgu0CxGfcAatoL7tmDZdvoIWM5efoyDeHPdnQ6c6MRbnC4tPyUmnIQYMJVedoVuJX66kbyQhjuACgISXtZuIOVxTnacvqvfMxfaMtO2sduK7RIOyGnT2RKuKgob04y5yckh41J6M5ETTAQFdoZ9JF743PQaWzEqLPuHAZy0hBOTm0nb0AdqF3DbVhxmbJNxIccZZzzoiJ0NIRUJgJqFUbQ3dsUFUHx5+vs0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+4ZGWp7pPunMx8hDCys1UmkIeHd7wh2zOj2YREaMmY
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHc0xh0oJhavn08bZVWDGvCk3xdQkfkZDfy91YLoiNcNbbXWnr/ZZCe5hG6OcxwK0MPa/K3qeCkvK8+EkrjtpzM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4oxY9DkZlJW8Zd3S8BqpjvSTZxL/s2VHQtjjR8/n6BHRm4a5xiahUvtPYeRle1OxSKi09NXmOFJmYekmkGepH+psi6i013bF87hachNQjoZ+mxLE6CZfu9g8RNGvzfl7GX74RaQPonoopoQw5K32DztfG0ggbjfQeIXoJyFreB4vVH912pFFtmxf/7OeW+Ghxhr6TuNGAOK4yfn7etQeQrudI1RrDq9XDJWokIIdRU7dUX/5u/LhcIrzBS7jcs1MxvkHxpoGnuDy4hsYsQxzOvtf7aDaJmR1Cf4SACCc5jsTb9yhVDUoBbB8+cbZyEK3ptZnI5rmPpRjaPa7g3DZdtDqVH1iop+xhrn1wyKlkFK6PoOtAonRvoQRp7TeWPE/g35abKE1T327yP9W26gYNJPlbe4gUlqeTqbJYuyAidt0Rbc6r6nRjO04SrPCQkJARMh3ObZXr7IDMT1hpl3qKivNNPACjukJ3jQlnYZXMJayY1/mYayyVL9pPwwDy+dM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAx1kLfZEj7ZTAjGrpHaC9R/HCuuz3C6C6WjmU1a1S4z
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQHaS+ym6+H9N0T11zZHG2jBaPMwaww7ZKtSls7mtu3Q2EZlUO9FG8bMOF46PodL9W0B6Ns5TuUHhqIq0OEY1Q=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCa6RS7lJapFXYhFEOarliHae3kAlRVB3Mpj5d8pLeBFLetAVeazvG4Dlu3MAf9MvsXouwCREHUStzNdGkA3zeRPtMdqi1ElJ9EAGLn/XEYyjk+SNqMZKkJuXapeu3A3gSjmLfpP/WG8DAzPtOMFavpTPjb2s++Cfvacm/+jCuLUZyqP32CsDfPHLh5ah0lVyJzKYHpJoJkUI+YE1rSj48+IoO462hoSS5gjQQtrDDKzOGcwMu0gKwAWovI2M1Zjd+QtMWwg6LOJdLMqmbc/uLtCiG/fM9Fzid6+WlrNL2UuC/QO1KYMt4HY6UgyIkBeRIlHW5PPIL2YKnP0K+spcn64DWIiz8HlYKgImtdGV+9S/oy1UMo9mD909F+rVfe7z4Odiha5/4yr9Wfqaog7405kUrGkmUJ9+m0VCKwp7imgkCh3ZGmVy9TkZb9EnUPQJZsTKBrITyIpagOxZeLQE3BMHVlwTm9Z1Lo1rkpt8NnV7QJmwYiPWu41RVa+hn22M8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAjnc0Ts1vxT+icYNashoW2iYerlkwmRX530JvKQ+eU
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDrQv4R+sch1O+OfrWYu+Tr+kYmEx3wGbhS8SiseFMhEjkBOjnQ5br37LVwQalmEoRLwBCczpNGk/ZHNpKcJLd0=
                                             create=True mode=0644 path=/tmp/ansible.zjt24ad6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:26 compute-0 sudo[68961]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:27 compute-0 sudo[69113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtsbhjagdgcahzwlsyoswitfwwbfmfuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030506.6905427-133-147465489324748/AnsiballZ_command.py'
Feb 02 11:08:27 compute-0 sudo[69113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:27 compute-0 python3.9[69115]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zjt24ad6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:27 compute-0 sudo[69113]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 11:08:28 compute-0 sudo[69269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uldcxxdyrhuxshchfpdeziinsshqoztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030508.627052-157-26132578922728/AnsiballZ_file.py'
Feb 02 11:08:28 compute-0 sudo[69269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:29 compute-0 python3.9[69271]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.zjt24ad6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:29 compute-0 sudo[69269]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:29 compute-0 sshd-session[68355]: Connection closed by 192.168.122.30 port 47574
Feb 02 11:08:29 compute-0 sshd-session[68352]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:08:29 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Feb 02 11:08:29 compute-0 systemd[1]: session-15.scope: Consumed 2.845s CPU time.
Feb 02 11:08:29 compute-0 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Feb 02 11:08:29 compute-0 systemd-logind[793]: Removed session 15.
Feb 02 11:08:34 compute-0 sshd-session[69296]: Accepted publickey for zuul from 192.168.122.30 port 47354 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:08:34 compute-0 systemd-logind[793]: New session 16 of user zuul.
Feb 02 11:08:34 compute-0 systemd[1]: Started Session 16 of User zuul.
Feb 02 11:08:34 compute-0 sshd-session[69296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:08:35 compute-0 python3.9[69449]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:08:36 compute-0 sudo[69603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gowyvpgondtdnybltxxdddvuourgmxwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030515.6578496-51-269943733981438/AnsiballZ_systemd.py'
Feb 02 11:08:36 compute-0 sudo[69603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:36 compute-0 python3.9[69605]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 11:08:36 compute-0 sudo[69603]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:36 compute-0 sudo[69757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjogoojacggzdgttbafhfyumrtpbhpmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030516.7136395-75-263522166051978/AnsiballZ_systemd.py'
Feb 02 11:08:36 compute-0 sudo[69757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:37 compute-0 python3.9[69759]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:08:37 compute-0 sudo[69757]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:37 compute-0 sudo[69910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeohhcicnpqwiruofpbrnusruqaxcrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030517.501162-102-261746152385516/AnsiballZ_command.py'
Feb 02 11:08:37 compute-0 sudo[69910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:38 compute-0 python3.9[69912]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:38 compute-0 sudo[69910]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:38 compute-0 sudo[70063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnzupjlegqzqjudvvjidzmxzmvqfvzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030518.3515928-126-254897972883971/AnsiballZ_stat.py'
Feb 02 11:08:38 compute-0 sudo[70063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:38 compute-0 python3.9[70065]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:08:38 compute-0 sudo[70063]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:39 compute-0 sudo[70217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocarvhpqjmarieoawebypklaffiyiwjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030519.0385444-150-74327661175715/AnsiballZ_command.py'
Feb 02 11:08:39 compute-0 sudo[70217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:39 compute-0 python3.9[70219]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:39 compute-0 sudo[70217]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:40 compute-0 sudo[70372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdstfsusezyyuhqjrqjqwtrxurwzxpze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030519.6739876-174-164863474030696/AnsiballZ_file.py'
Feb 02 11:08:40 compute-0 sudo[70372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:40 compute-0 python3.9[70374]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:08:40 compute-0 sudo[70372]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:40 compute-0 sshd-session[69299]: Connection closed by 192.168.122.30 port 47354
Feb 02 11:08:40 compute-0 sshd-session[69296]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:08:40 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Feb 02 11:08:40 compute-0 systemd[1]: session-16.scope: Consumed 3.688s CPU time.
Feb 02 11:08:40 compute-0 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Feb 02 11:08:40 compute-0 systemd-logind[793]: Removed session 16.
Feb 02 11:08:41 compute-0 sshd-session[70399]: Received disconnect from 91.224.92.78 port 62274:11:  [preauth]
Feb 02 11:08:41 compute-0 sshd-session[70399]: Disconnected from authenticating user root 91.224.92.78 port 62274 [preauth]
Feb 02 11:08:46 compute-0 sshd-session[70402]: Accepted publickey for zuul from 192.168.122.30 port 56508 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:08:46 compute-0 systemd-logind[793]: New session 17 of user zuul.
Feb 02 11:08:46 compute-0 systemd[1]: Started Session 17 of User zuul.
Feb 02 11:08:46 compute-0 sshd-session[70402]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:08:47 compute-0 python3.9[70555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:08:48 compute-0 sudo[70709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novbtkpfyptqkjglsknptirnjkbnyafi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030527.7871692-57-117402134900584/AnsiballZ_setup.py'
Feb 02 11:08:48 compute-0 sudo[70709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:48 compute-0 python3.9[70711]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:08:48 compute-0 sudo[70709]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:48 compute-0 sudo[70793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnfjxegenhxrklyjwtadcwjpfwpddptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030527.7871692-57-117402134900584/AnsiballZ_dnf.py'
Feb 02 11:08:48 compute-0 sudo[70793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:08:49 compute-0 python3.9[70795]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 11:08:50 compute-0 sudo[70793]: pam_unix(sudo:session): session closed for user root
Feb 02 11:08:51 compute-0 python3.9[70946]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:08:52 compute-0 python3.9[71097]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:08:53 compute-0 python3.9[71247]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:08:53 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:08:53 compute-0 python3.9[71398]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:08:54 compute-0 sshd-session[70405]: Connection closed by 192.168.122.30 port 56508
Feb 02 11:08:54 compute-0 sshd-session[70402]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:08:54 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Feb 02 11:08:54 compute-0 systemd[1]: session-17.scope: Consumed 5.283s CPU time.
Feb 02 11:08:54 compute-0 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Feb 02 11:08:54 compute-0 systemd-logind[793]: Removed session 17.
Feb 02 11:09:01 compute-0 anacron[30961]: Job `cron.daily' started
Feb 02 11:09:01 compute-0 anacron[30961]: Job `cron.daily' terminated
Feb 02 11:09:02 compute-0 sshd-session[71425]: Accepted publickey for zuul from 38.102.83.234 port 33630 ssh2: RSA SHA256:f3COXnxExycz7Aj38ISRU64EvYtTxFIG87F84UY80h8
Feb 02 11:09:02 compute-0 systemd-logind[793]: New session 18 of user zuul.
Feb 02 11:09:02 compute-0 systemd[1]: Started Session 18 of User zuul.
Feb 02 11:09:02 compute-0 sshd-session[71425]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:09:02 compute-0 sudo[71501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twlnaqhxyhkrrowxevjsuxlzxjigbrcd ; /usr/bin/python3'
Feb 02 11:09:02 compute-0 sudo[71501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:02 compute-0 useradd[71505]: new group: name=ceph-admin, GID=42478
Feb 02 11:09:02 compute-0 useradd[71505]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Feb 02 11:09:02 compute-0 sudo[71501]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:02 compute-0 sudo[71587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tidwafzwqcdixrfrsnkddwhqjcbytfeo ; /usr/bin/python3'
Feb 02 11:09:02 compute-0 sudo[71587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:02 compute-0 sudo[71587]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:03 compute-0 sudo[71660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfmiotjrwbemaerucemekvblkvlgcyu ; /usr/bin/python3'
Feb 02 11:09:03 compute-0 sudo[71660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:03 compute-0 sudo[71660]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:03 compute-0 sudo[71710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydzrvswiicflykrlrljkvcfqdzeuclht ; /usr/bin/python3'
Feb 02 11:09:03 compute-0 sudo[71710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:03 compute-0 sudo[71710]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:03 compute-0 sudo[71736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vreimwvispnirzqleqaewebjdczgbwnl ; /usr/bin/python3'
Feb 02 11:09:03 compute-0 sudo[71736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:04 compute-0 sudo[71736]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:04 compute-0 sudo[71762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwsnbjyfvtxecumtadakmwbtijiahct ; /usr/bin/python3'
Feb 02 11:09:04 compute-0 sudo[71762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:04 compute-0 sudo[71762]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:04 compute-0 sudo[71788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpyefgqbjfxzgdultwfhpxjhnbdgtqz ; /usr/bin/python3'
Feb 02 11:09:04 compute-0 sudo[71788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:04 compute-0 sudo[71788]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:05 compute-0 sudo[71866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmwadzwftqubmnhfrvcnqdksgopzdlwi ; /usr/bin/python3'
Feb 02 11:09:05 compute-0 sudo[71866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:05 compute-0 sudo[71866]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:05 compute-0 sudo[71939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrzrexszptsxssldccawnwcwjyfqvbf ; /usr/bin/python3'
Feb 02 11:09:05 compute-0 sudo[71939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:05 compute-0 sudo[71939]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:06 compute-0 sudo[72041]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znajcqihmhoxmrmdephafghecegsxhut ; /usr/bin/python3'
Feb 02 11:09:06 compute-0 sudo[72041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:06 compute-0 sudo[72041]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:06 compute-0 sudo[72114]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eehgbreamaayhuwycjccpbnrlwgfwzrk ; /usr/bin/python3'
Feb 02 11:09:06 compute-0 sudo[72114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:06 compute-0 sudo[72114]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:06 compute-0 sudo[72164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czczfxmflqetqmlxmxsktlzecqyztsgw ; /usr/bin/python3'
Feb 02 11:09:06 compute-0 sudo[72164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:07 compute-0 python3[72166]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:09:07 compute-0 sudo[72164]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:07 compute-0 sudo[72198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qerloyzrresbznjuobdzcziinzwrkavt ; /usr/bin/python3'
Feb 02 11:09:07 compute-0 sudo[72198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:07 compute-0 python3[72200]: ansible-ansible.legacy.dnf Invoked with name=['jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:09 compute-0 sudo[72198]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:09 compute-0 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhzrjxzccqpnpzurhenorokobsudain ; /usr/bin/python3'
Feb 02 11:09:09 compute-0 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:09 compute-0 python3[72227]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:11 compute-0 sudo[72225]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:11 compute-0 sudo[72282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eauxstbalrxgsjdqlfuakldeocqijmkq ; /usr/bin/python3'
Feb 02 11:09:11 compute-0 sudo[72282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:12 compute-0 python3[72284]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:14 compute-0 groupadd[72294]: group added to /etc/group: name=cephadm, GID=993
Feb 02 11:09:14 compute-0 groupadd[72294]: group added to /etc/gshadow: name=cephadm
Feb 02 11:09:14 compute-0 groupadd[72294]: new group: name=cephadm, GID=993
Feb 02 11:09:14 compute-0 useradd[72301]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Feb 02 11:09:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:09:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:09:14 compute-0 sudo[72282]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:09:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:09:15 compute-0 systemd[1]: run-r2530d67ccea4468ca80adee5f33cfcde.service: Deactivated successfully.
Feb 02 11:09:15 compute-0 sudo[72397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flruzrjlcshxtbcsixwuemqrdgahzqsq ; /usr/bin/python3'
Feb 02 11:09:15 compute-0 sudo[72397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:15 compute-0 python3[72399]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:16 compute-0 sudo[72397]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:16 compute-0 sudo[72425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tglbjmlphpjxrcxqdbrpommvuhwudeyb ; /usr/bin/python3'
Feb 02 11:09:16 compute-0 sudo[72425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:16 compute-0 python3[72427]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:09:17 compute-0 sudo[72425]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:17 compute-0 sudo[72520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewxgabiknbjasinigrhzmbyaqiusnja ; /usr/bin/python3'
Feb 02 11:09:17 compute-0 sudo[72520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:17 compute-0 python3[72522]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:19 compute-0 sudo[72520]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:19 compute-0 sudo[72547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdenfadbtzmfdmxazgcqrzwywvglthgr ; /usr/bin/python3'
Feb 02 11:09:19 compute-0 sudo[72547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:19 compute-0 python3[72549]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:19 compute-0 sudo[72547]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:19 compute-0 sudo[72573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjizvufmntjykrpnbdaqxdceoyjovnuz ; /usr/bin/python3'
Feb 02 11:09:19 compute-0 sudo[72573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:19 compute-0 python3[72575]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:09:19 compute-0 kernel: loop: module loaded
Feb 02 11:09:19 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Feb 02 11:09:20 compute-0 sudo[72573]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:20 compute-0 sudo[72608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmnxzrhampiqshndfwvcpwuinfysbjkz ; /usr/bin/python3'
Feb 02 11:09:20 compute-0 sudo[72608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:20 compute-0 python3[72610]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:09:20 compute-0 lvm[72613]: PV /dev/loop3 not used.
Feb 02 11:09:20 compute-0 lvm[72622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:09:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 02 11:09:20 compute-0 lvm[72624]:   1 logical volume(s) in volume group "ceph_vg0" now active
Feb 02 11:09:20 compute-0 sudo[72608]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:20 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 02 11:09:20 compute-0 sudo[72700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkgtuymnodniperjpnwnbspjoqfdsae ; /usr/bin/python3'
Feb 02 11:09:20 compute-0 sudo[72700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:20 compute-0 python3[72702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:09:20 compute-0 sudo[72700]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:21 compute-0 sudo[72773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikhiqbbiynkfuejdwroswwlhuwfadey ; /usr/bin/python3'
Feb 02 11:09:21 compute-0 sudo[72773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:21 compute-0 python3[72775]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030560.7036495-36990-162348289071331/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:09:21 compute-0 sudo[72773]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:21 compute-0 sudo[72823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynxguervxiidtouifvlckhknpaukxblx ; /usr/bin/python3'
Feb 02 11:09:21 compute-0 sudo[72823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:22 compute-0 python3[72825]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:09:22 compute-0 systemd[1]: Reloading.
Feb 02 11:09:22 compute-0 systemd-rc-local-generator[72856]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:22 compute-0 systemd-sysv-generator[72859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:22 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 02 11:09:22 compute-0 bash[72866]: /dev/loop3: [64513]:4329573 (/var/lib/ceph-osd-0.img)
Feb 02 11:09:22 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 02 11:09:22 compute-0 lvm[72867]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:09:22 compute-0 lvm[72867]: VG ceph_vg0 finished
Feb 02 11:09:22 compute-0 sudo[72823]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:24 compute-0 python3[72891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:09:26 compute-0 sudo[72982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimftqnsfroijkludcxodlcdyettdouq ; /usr/bin/python3'
Feb 02 11:09:26 compute-0 sudo[72982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:26 compute-0 python3[72984]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:27 compute-0 chronyd[58642]: Selected source 138.197.135.239 (pool.ntp.org)
Feb 02 11:09:27 compute-0 sudo[72982]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:28 compute-0 sudo[73009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csqarcpqfnekfzjglplmcklepyjjwpqw ; /usr/bin/python3'
Feb 02 11:09:28 compute-0 sudo[73009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:28 compute-0 python3[73011]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 11:09:29 compute-0 sudo[73009]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:29 compute-0 sudo[73036]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anajvayakwpmcvabraedwbaolzawhpue ; /usr/bin/python3'
Feb 02 11:09:29 compute-0 sudo[73036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:29 compute-0 python3[73038]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:29 compute-0 sudo[73036]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:30 compute-0 sudo[73064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqgjbbmiufveiirygmyphdmavznbgpo ; /usr/bin/python3'
Feb 02 11:09:30 compute-0 sudo[73064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:30 compute-0 python3[73066]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:30 compute-0 sudo[73064]: pam_unix(sudo:session): session closed for user root
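The command-module call above runs /usr/sbin/cephadm ls --no-detail, which prints a JSON inventory of the daemons already deployed on the host; an idempotent playbook can treat an empty list as its "not yet bootstrapped" signal. A sketch of that check, assuming the output is a top-level JSON array and that the surrounding playbook gates bootstrap on it being empty:

    # Pre-bootstrap check: `cephadm ls --no-detail` emits a JSON array of
    # daemons on this host. Treating an empty array as "safe to bootstrap"
    # is an assumption about how the playbook uses the result.
    import json
    import subprocess

    out = subprocess.run(["/usr/sbin/cephadm", "ls", "--no-detail"],
                         check=True, capture_output=True, text=True).stdout
    daemons = json.loads(out)
    if daemons:
        print(f"{len(daemons)} daemon(s) present; skipping bootstrap")
    else:
        print("no daemons found; proceeding to bootstrap")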
Feb 02 11:09:30 compute-0 sudo[73128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtfuadbvhjpzekyfgbdwtxntgkuznks ; /usr/bin/python3'
Feb 02 11:09:30 compute-0 sudo[73128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:31 compute-0 python3[73130]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:09:31 compute-0 sudo[73128]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:31 compute-0 sudo[73154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmvzphryssleorwprmwnloftxogiljw ; /usr/bin/python3'
Feb 02 11:09:31 compute-0 sudo[73154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:31 compute-0 python3[73156]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:09:31 compute-0 sudo[73154]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:31 compute-0 sudo[73232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szllfurpmirrrcvtkzppqbltshfhdixt ; /usr/bin/python3'
Feb 02 11:09:31 compute-0 sudo[73232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:31 compute-0 python3[73234]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:09:31 compute-0 sudo[73232]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:32 compute-0 sudo[73305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktjdtkhvqyudrmiqgwopfxhykzdeialm ; /usr/bin/python3'
Feb 02 11:09:32 compute-0 sudo[73305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:32 compute-0 python3[73307]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030571.6503398-37182-220315923610545/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:09:32 compute-0 sudo[73305]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:32 compute-0 sudo[73407]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvmiwkiknpeinzkwagqzxkwwtmbyqxe ; /usr/bin/python3'
Feb 02 11:09:32 compute-0 sudo[73407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:32 compute-0 python3[73409]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:09:32 compute-0 sudo[73407]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:32 compute-0 sudo[73480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhgrvvdqmusfllzdwzdbagmmdgnklbv ; /usr/bin/python3'
Feb 02 11:09:32 compute-0 sudo[73480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:33 compute-0 python3[73482]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030572.590225-37200-2749028120473/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:09:33 compute-0 sudo[73480]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:33 compute-0 sudo[73530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnwxjpqkyocjblsqeonpmcseovsgzip ; /usr/bin/python3'
Feb 02 11:09:33 compute-0 sudo[73530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:33 compute-0 python3[73532]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:33 compute-0 sudo[73530]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:33 compute-0 sudo[73558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnzochxkbwgvihbazkjuageeqhfscne ; /usr/bin/python3'
Feb 02 11:09:33 compute-0 sudo[73558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:33 compute-0 python3[73560]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:33 compute-0 sudo[73558]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:33 compute-0 sudo[73586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mutxlcoihcxoojwuadufmrmibpiukjvt ; /usr/bin/python3'
Feb 02 11:09:33 compute-0 sudo[73586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:34 compute-0 python3[73588]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:34 compute-0 sudo[73586]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:34 compute-0 python3[73614]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:09:34 compute-0 sudo[73638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufggffahyssfplzhsddcoocvlixtpxga ; /usr/bin/python3'
Feb 02 11:09:34 compute-0 sudo[73638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:09:34 compute-0 python3[73640]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
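Every flag in the bootstrap call above comes straight from the logged _raw_params; reassembled as a standalone script, with only the argument grouping added:

    # The exact `cephadm bootstrap` invocation logged above. All values
    # (fsid, mon IP, key and config paths) are taken from the log entry
    # itself; nothing is added beyond the list structure.
    import subprocess

    bootstrap = [
        "/usr/sbin/cephadm", "bootstrap",
        "--skip-firewalld",
        "--ssh-private-key", "/home/ceph-admin/.ssh/id_rsa",
        "--ssh-public-key", "/home/ceph-admin/.ssh/id_rsa.pub",
        "--ssh-user", "ceph-admin",
        "--allow-fqdn-hostname",
        "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
        "--output-config", "/etc/ceph/ceph.conf",
        "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
        "--config", "/home/ceph-admin/assimilate_ceph.conf",
        "--skip-monitoring-stack",
        "--skip-dashboard",
        "--mon-ip", "192.168.122.100",
    ]
    subprocess.run(bootstrap, check=True)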
Feb 02 11:09:34 compute-0 sshd-session[73644]: Accepted publickey for ceph-admin from 192.168.122.100 port 44876 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:09:34 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 02 11:09:34 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 02 11:09:34 compute-0 systemd-logind[793]: New session 19 of user ceph-admin.
Feb 02 11:09:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 02 11:09:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 02 11:09:35 compute-0 systemd[73648]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:09:35 compute-0 systemd[73648]: Queued start job for default target Main User Target.
Feb 02 11:09:35 compute-0 systemd[73648]: Created slice User Application Slice.
Feb 02 11:09:35 compute-0 systemd[73648]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 11:09:35 compute-0 systemd[73648]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 11:09:35 compute-0 systemd[73648]: Reached target Paths.
Feb 02 11:09:35 compute-0 systemd[73648]: Reached target Timers.
Feb 02 11:09:35 compute-0 systemd[73648]: Starting D-Bus User Message Bus Socket...
Feb 02 11:09:35 compute-0 systemd[73648]: Starting Create User's Volatile Files and Directories...
Feb 02 11:09:35 compute-0 systemd[73648]: Finished Create User's Volatile Files and Directories.
Feb 02 11:09:35 compute-0 systemd[73648]: Listening on D-Bus User Message Bus Socket.
Feb 02 11:09:35 compute-0 systemd[73648]: Reached target Sockets.
Feb 02 11:09:35 compute-0 systemd[73648]: Reached target Basic System.
Feb 02 11:09:35 compute-0 systemd[73648]: Reached target Main User Target.
Feb 02 11:09:35 compute-0 systemd[73648]: Startup finished in 91ms.
Feb 02 11:09:35 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 02 11:09:35 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Feb 02 11:09:35 compute-0 sshd-session[73644]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:09:35 compute-0 sudo[73663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Feb 02 11:09:35 compute-0 sudo[73663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:09:35 compute-0 sudo[73663]: pam_unix(sudo:session): session closed for user root
Feb 02 11:09:35 compute-0 sshd-session[73662]: Received disconnect from 192.168.122.100 port 44876:11: disconnected by user
Feb 02 11:09:35 compute-0 sshd-session[73662]: Disconnected from user ceph-admin 192.168.122.100 port 44876
Feb 02 11:09:35 compute-0 sshd-session[73644]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:09:35 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Feb 02 11:09:35 compute-0 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Feb 02 11:09:35 compute-0 systemd-logind[793]: Removed session 19.
Feb 02 11:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat797420871-lower\x2dmapped.mount: Deactivated successfully.
Feb 02 11:09:45 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Feb 02 11:09:45 compute-0 systemd[73648]: Activating special unit Exit the Session...
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped target Main User Target.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped target Basic System.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped target Paths.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped target Sockets.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped target Timers.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 02 11:09:45 compute-0 systemd[73648]: Closed D-Bus User Message Bus Socket.
Feb 02 11:09:45 compute-0 systemd[73648]: Stopped Create User's Volatile Files and Directories.
Feb 02 11:09:45 compute-0 systemd[73648]: Removed slice User Application Slice.
Feb 02 11:09:45 compute-0 systemd[73648]: Reached target Shutdown.
Feb 02 11:09:45 compute-0 systemd[73648]: Finished Exit the Session.
Feb 02 11:09:45 compute-0 systemd[73648]: Reached target Exit the Session.
Feb 02 11:09:45 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Feb 02 11:09:45 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Feb 02 11:09:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 02 11:09:45 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 02 11:09:45 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 02 11:09:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 02 11:09:45 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Feb 02 11:09:51 compute-0 sshd-session[73799]: Invalid user ubuntu from 80.94.92.186 port 40352
Feb 02 11:09:51 compute-0 sshd-session[73799]: Connection closed by invalid user ubuntu 80.94.92.186 port 40352 [preauth]
Feb 02 11:09:55 compute-0 podman[73739]: 2026-02-02 11:09:55.117589733 +0000 UTC m=+19.680400729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.17279242 +0000 UTC m=+0.034884397 container create 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:09:55 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 02 11:09:55 compute-0 systemd[1]: Started libpod-conmon-1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103.scope.
Feb 02 11:09:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.253625198 +0000 UTC m=+0.115717195 container init 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.158473891 +0000 UTC m=+0.020565898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.261379009 +0000 UTC m=+0.123470986 container start 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.264867439 +0000 UTC m=+0.126959436 container attach 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:09:55 compute-0 nostalgic_allen[73819]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Feb 02 11:09:55 compute-0 systemd[1]: libpod-1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.356929458 +0000 UTC m=+0.219021435 container died 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2402a4f3ade666ee4339add43428d9283493d4705801dfc46c007d87952224-merged.mount: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73802]: 2026-02-02 11:09:55.390587659 +0000 UTC m=+0.252679636 container remove 1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103 (image=quay.io/ceph/ceph:v19, name=nostalgic_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:09:55 compute-0 systemd[1]: libpod-conmon-1d500c29d77c37338fcb2b47086f8a2f1309897c7ca5b679bfedb0832b0e7103.scope: Deactivated successfully.
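The create → init → start → attach → died → remove sequence above is the journald footprint of a short-lived `podman run --rm`: bootstrap uses one to read the image's Ceph version, logged as "ceph version 19.2.3 ... squid (stable)". An equivalent one-off probe, assuming the same image tag:

    # Reproduce the version-probe container seen above: run `ceph --version`
    # in quay.io/ceph/ceph:v19 (already cached here) and remove the container
    # on exit, which journald records as create/.../died/remove.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # e.g. "ceph version 19.2.3 (...) squid (stable)"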
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.443632353 +0000 UTC m=+0.037475621 container create 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:09:55 compute-0 systemd[1]: Started libpod-conmon-0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45.scope.
Feb 02 11:09:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.496506023 +0000 UTC m=+0.090349301 container init 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.501547087 +0000 UTC m=+0.095390355 container start 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:09:55 compute-0 heuristic_bardeen[73852]: 167 167
Feb 02 11:09:55 compute-0 systemd[1]: libpod-0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.504791409 +0000 UTC m=+0.098634707 container attach 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.505197511 +0000 UTC m=+0.099040779 container died 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.427488172 +0000 UTC m=+0.021331440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:55 compute-0 podman[73836]: 2026-02-02 11:09:55.531260695 +0000 UTC m=+0.125103963 container remove 0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45 (image=quay.io/ceph/ceph:v19, name=heuristic_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:09:55 compute-0 systemd[1]: libpod-conmon-0953bf438e19338c25701a3899e4358f893b6a24e1735ea6b3781cbccfefbf45.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.582674583 +0000 UTC m=+0.034747643 container create 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:09:55 compute-0 systemd[1]: Started libpod-conmon-0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6.scope.
Feb 02 11:09:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.628524323 +0000 UTC m=+0.080597443 container init 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.633537206 +0000 UTC m=+0.085610266 container start 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.63719757 +0000 UTC m=+0.089270640 container attach 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:09:55 compute-0 nice_ishizaka[73885]: AQADhoBp99jaJhAA2G36uNmkYIEXpGEVA5Iegw==
Feb 02 11:09:55 compute-0 systemd[1]: libpod-0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.654050472 +0000 UTC m=+0.106123522 container died 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.567314955 +0000 UTC m=+0.019388045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:55 compute-0 podman[73869]: 2026-02-02 11:09:55.685943022 +0000 UTC m=+0.138016082 container remove 0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6 (image=quay.io/ceph/ceph:v19, name=nice_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:09:55 compute-0 systemd[1]: libpod-conmon-0b412e3d9726a9b07096f90ae72f36baadfa5489a661ac379bd21bb34d59bbf6.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.744937147 +0000 UTC m=+0.042111734 container create 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:09:55 compute-0 systemd[1]: Started libpod-conmon-0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12.scope.
Feb 02 11:09:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.800470053 +0000 UTC m=+0.097644650 container init 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.804359854 +0000 UTC m=+0.101534441 container start 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.807607507 +0000 UTC m=+0.104782104 container attach 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.728903899 +0000 UTC m=+0.026078516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:55 compute-0 hungry_poitras[73922]: AQADhoBp4bIvMRAAbbmv/o+tg8SnlDisl2yMxA==
Feb 02 11:09:55 compute-0 systemd[1]: libpod-0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12.scope: Deactivated successfully.
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.828625537 +0000 UTC m=+0.125800124 container died 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:09:55 compute-0 podman[73905]: 2026-02-02 11:09:55.862163594 +0000 UTC m=+0.159338182 container remove 0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12 (image=quay.io/ceph/ceph:v19, name=hungry_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:09:55 compute-0 systemd[1]: libpod-conmon-0aa14b78aca22262c393d10ee053247e6093bf982683e0325c56b3a0e7d2ef12.scope: Deactivated successfully.
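The single-line AQ...== outputs from the three containers above are freshly generated cephx secrets. That cephadm obtains them by running ceph-authtool inside the image is an assumption here; the log records only the resulting keys. A sketch under that assumption:

    # Generate a base64 cephx secret like the ones printed by the
    # containers above. The use of `ceph-authtool --gen-print-key` is an
    # assumption; the log shows only the keys themselves.
    import subprocess

    key = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19",
         "ceph-authtool", "--gen-print-key"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(key)  # 40-character base64 secret of the form "AQ...=="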
Feb 02 11:09:55 compute-0 podman[73941]: 2026-02-02 11:09:55.909317831 +0000 UTC m=+0.031162761 container create dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:09:55 compute-0 systemd[1]: Started libpod-conmon-dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f.scope.
Feb 02 11:09:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:55 compute-0 podman[73941]: 2026-02-02 11:09:55.896366391 +0000 UTC m=+0.018211341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:56 compute-0 podman[73941]: 2026-02-02 11:09:56.004577001 +0000 UTC m=+0.126421931 container init dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:09:56 compute-0 podman[73941]: 2026-02-02 11:09:56.009879463 +0000 UTC m=+0.131724393 container start dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:09:56 compute-0 condescending_boyd[73957]: AQAEhoBppY6pARAAFrxkcDwpYY5RNHKXITaEfw==
Feb 02 11:09:56 compute-0 systemd[1]: libpod-dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f.scope: Deactivated successfully.
Feb 02 11:09:56 compute-0 podman[73941]: 2026-02-02 11:09:56.101010705 +0000 UTC m=+0.222855635 container attach dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:09:56 compute-0 podman[73941]: 2026-02-02 11:09:56.101488769 +0000 UTC m=+0.223333699 container died dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c04d0e839bad0185aea62b84df6bb1f17f81548916ffc54fe5e4227447bd78-merged.mount: Deactivated successfully.
Feb 02 11:09:56 compute-0 podman[73941]: 2026-02-02 11:09:56.779599053 +0000 UTC m=+0.901443983 container remove dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f (image=quay.io/ceph/ceph:v19, name=condescending_boyd, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:56 compute-0 systemd[1]: libpod-conmon-dd89c456488ebaf549de67a9887db8235d9802fad62d8267639954386dde2f9f.scope: Deactivated successfully.
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.834648285 +0000 UTC m=+0.038068458 container create 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:09:56 compute-0 systemd[1]: Started libpod-conmon-8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915.scope.
Feb 02 11:09:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c56335e4ce8877019fc7af73ce1df525dba8d7a843c51c02af31cd6d48b51e3/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.893937078 +0000 UTC m=+0.097357261 container init 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.898093277 +0000 UTC m=+0.101513440 container start 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.900902927 +0000 UTC m=+0.104323130 container attach 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.818785062 +0000 UTC m=+0.022205255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:56 compute-0 elastic_diffie[73994]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb 02 11:09:56 compute-0 elastic_diffie[73994]: setting min_mon_release = quincy
Feb 02 11:09:56 compute-0 elastic_diffie[73994]: /usr/bin/monmaptool: set fsid to 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:56 compute-0 elastic_diffie[73994]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb 02 11:09:56 compute-0 systemd[1]: libpod-8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915.scope: Deactivated successfully.
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.923879443 +0000 UTC m=+0.127299656 container died 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:09:56 compute-0 podman[73978]: 2026-02-02 11:09:56.962349531 +0000 UTC m=+0.165769704 container remove 8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915 (image=quay.io/ceph/ceph:v19, name=elastic_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:09:56 compute-0 systemd[1]: libpod-conmon-8bab6c7967195b1c538633bcb56c2a4c7884be563247697107019a5e608db915.scope: Deactivated successfully.
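The elastic_diffie lines above are monmaptool's output while writing the initial one-monitor map for the new cluster. A sketch of the call that produces such output, with the fsid and monitor address taken from the log; the exact flag set (--create/--addv and the v2/v1 address vector) is an assumption reconstructed from the tool's output, not shown in the log:

    # Create an initial monmap with the fsid logged above and one monitor
    # at the bootstrap --mon-ip. Flags are an assumption; the log records
    # only monmaptool's output ("writing epoch 0 ... (1 monitors)").
    import subprocess

    subprocess.run(
        ["/usr/bin/monmaptool", "--create",
         "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
         "--addv", "compute-0",
         "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]",
         "/tmp/monmap"],
        check=True,
    )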
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.021549332 +0000 UTC m=+0.039536170 container create 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:09:57 compute-0 systemd[1]: Started libpod-conmon-286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848.scope.
Feb 02 11:09:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e10757e0c9fdac0fd35448eef6079e6e47d2718954bebf268268119c9a49a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e10757e0c9fdac0fd35448eef6079e6e47d2718954bebf268268119c9a49a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e10757e0c9fdac0fd35448eef6079e6e47d2718954bebf268268119c9a49a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e10757e0c9fdac0fd35448eef6079e6e47d2718954bebf268268119c9a49a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.095108162 +0000 UTC m=+0.113095010 container init 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.00153127 +0000 UTC m=+0.019518148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.102784032 +0000 UTC m=+0.120770860 container start 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.105795768 +0000 UTC m=+0.123782626 container attach 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:09:57 compute-0 systemd[1]: libpod-286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848.scope: Deactivated successfully.
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.178839783 +0000 UTC m=+0.196826611 container died 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11e10757e0c9fdac0fd35448eef6079e6e47d2718954bebf268268119c9a49a-merged.mount: Deactivated successfully.
Feb 02 11:09:57 compute-0 podman[74014]: 2026-02-02 11:09:57.208907922 +0000 UTC m=+0.226894750 container remove 286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848 (image=quay.io/ceph/ceph:v19, name=inspiring_taussig, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:57 compute-0 systemd[1]: libpod-conmon-286f1deacb383dac66d8236208a0978c7b07493bfc956c08c0eb2bbc8f00c848.scope: Deactivated successfully.
Feb 02 11:09:57 compute-0 systemd[1]: Reloading.
Feb 02 11:09:57 compute-0 systemd-rc-local-generator[74095]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:57 compute-0 systemd-sysv-generator[74098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:57 compute-0 systemd[1]: Reloading.
Feb 02 11:09:57 compute-0 systemd-rc-local-generator[74129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:57 compute-0 systemd-sysv-generator[74134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:57 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Feb 02 11:09:57 compute-0 systemd[1]: Reloading.
Feb 02 11:09:57 compute-0 systemd-sysv-generator[74173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:57 compute-0 systemd-rc-local-generator[74170]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:57 compute-0 systemd[1]: Reached target Ceph cluster 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:09:57 compute-0 systemd[1]: Reloading.
Feb 02 11:09:57 compute-0 systemd-rc-local-generator[74208]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:57 compute-0 systemd-sysv-generator[74212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:58 compute-0 systemd[1]: Reloading.
Feb 02 11:09:58 compute-0 systemd-rc-local-generator[74247]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:09:58 compute-0 systemd-sysv-generator[74254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:09:58 compute-0 systemd[1]: Created slice Slice /system/ceph-1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:09:58 compute-0 systemd[1]: Reached target System Time Set.
Feb 02 11:09:58 compute-0 systemd[1]: Reached target System Time Synchronized.
Feb 02 11:09:58 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:09:58 compute-0 podman[74307]: 2026-02-02 11:09:58.483263083 +0000 UTC m=+0.038014147 container create f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b119d726a6a5c57dd2a69316bf6efe8b8d277af52c4bb169e949af19953c6be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b119d726a6a5c57dd2a69316bf6efe8b8d277af52c4bb169e949af19953c6be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b119d726a6a5c57dd2a69316bf6efe8b8d277af52c4bb169e949af19953c6be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b119d726a6a5c57dd2a69316bf6efe8b8d277af52c4bb169e949af19953c6be/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 podman[74307]: 2026-02-02 11:09:58.534224748 +0000 UTC m=+0.088975832 container init f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:09:58 compute-0 podman[74307]: 2026-02-02 11:09:58.538452989 +0000 UTC m=+0.093204053 container start f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:09:58 compute-0 bash[74307]: f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137
Feb 02 11:09:58 compute-0 podman[74307]: 2026-02-02 11:09:58.466400031 +0000 UTC m=+0.021151115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:58 compute-0 systemd[1]: Started Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:09:58 compute-0 ceph-mon[74327]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: pidfile_write: ignore empty --pid-file
Feb 02 11:09:58 compute-0 ceph-mon[74327]: load: jerasure load: lrc 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: RocksDB version: 7.9.2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Git sha 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Compile date 2025-07-17 03:12:14
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: DB SUMMARY
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: DB Session ID:  6PXTIQMOTI9PPQMSB32R
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: CURRENT file:  CURRENT
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                         Options.error_if_exists: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.create_if_missing: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                                     Options.env: 0x55977f690c20
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                                Options.info_log: 0x559781876d60
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                              Options.statistics: (nil)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                               Options.use_fsync: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                              Options.db_log_dir: 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                                 Options.wal_dir: 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                    Options.write_buffer_manager: 0x55978187b900
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.unordered_write: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                               Options.row_cache: None
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                              Options.wal_filter: None
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.two_write_queues: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.wal_compression: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.atomic_flush: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.max_background_jobs: 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.max_background_compactions: -1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.max_subcompactions: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.max_total_wal_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                          Options.max_open_files: -1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:       Options.compaction_readahead_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Compression algorithms supported:
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kZSTD supported: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kXpressCompression supported: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kBZip2Compression supported: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kLZ4Compression supported: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kZlibCompression supported: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         kSnappyCompression supported: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:           Options.merge_operator: 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:        Options.compaction_filter: None
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559781876500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55978189b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:        Options.write_buffer_size: 33554432
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:  Options.max_write_buffer_number: 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.compression: NoCompression
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.num_levels: 7
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8d3f2a2d-cae1-4d7e-a420-44d61e6b143d
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030598577216, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030598578834, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "6PXTIQMOTI9PPQMSB32R", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030598578938, "job": 1, "event": "recovery_finished"}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55978189ce00
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: DB pointer 0x5597819a6000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:09:58 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55978189b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 11:09:58 compute-0 ceph-mon[74327]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@-1(???) e0 preinit fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e0 win_standalone_election
Feb 02 11:09:58 compute-0 ceph-mon[74327]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 02 11:09:58 compute-0 ceph-mon[74327]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:09:56.920509+0000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2026-02-02T11:09:56.920509+0000
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.612681628 +0000 UTC m=+0.045068768 container create 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 new map
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-02-02T11:09:58.610843+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mkfs 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:09:58 compute-0 systemd[1]: Started libpod-conmon-9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98.scope.
Feb 02 11:09:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7620ae269b47b86ea1b68d0030a5688a5560db03946f994d712607d5d2c1dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7620ae269b47b86ea1b68d0030a5688a5560db03946f994d712607d5d2c1dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7620ae269b47b86ea1b68d0030a5688a5560db03946f994d712607d5d2c1dc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.68626613 +0000 UTC m=+0.118653290 container init 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.591524714 +0000 UTC m=+0.023911864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.69187799 +0000 UTC m=+0.124265130 container start 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.695182144 +0000 UTC m=+0.127569294 container attach 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 11:09:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1023083298' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:   cluster:
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     id:     1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     health: HEALTH_OK
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:  
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:   services:
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     mon: 1 daemons, quorum compute-0 (age 0.268607s)
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     mgr: no daemons active
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     osd: 0 osds: 0 up, 0 in
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:  
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:   data:
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     pools:   0 pools, 0 pgs
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     objects: 0 objects, 0 B
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     usage:   0 B used, 0 B / 0 B avail
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:     pgs:     
Feb 02 11:09:58 compute-0 jovial_poitras[74382]:  
Feb 02 11:09:58 compute-0 systemd[1]: libpod-9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98.scope: Deactivated successfully.
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.896431341 +0000 UTC m=+0.328818501 container died 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:09:58 compute-0 podman[74328]: 2026-02-02 11:09:58.924918825 +0000 UTC m=+0.357305965 container remove 9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98 (image=quay.io/ceph/ceph:v19, name=jovial_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:09:58 compute-0 systemd[1]: libpod-conmon-9fd6e018bbb91617b9303903d2f58b3fbd7eecd076b91d40d2ffd9c13f743b98.scope: Deactivated successfully.
Feb 02 11:09:58 compute-0 podman[74420]: 2026-02-02 11:09:58.973377267 +0000 UTC m=+0.032125907 container create 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:59 compute-0 systemd[1]: Started libpod-conmon-4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32.scope.
Feb 02 11:09:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f468aa19531bde48a90b39cf2f303c5a550b05966e10cebe7e655677ef31d9c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f468aa19531bde48a90b39cf2f303c5a550b05966e10cebe7e655677ef31d9c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f468aa19531bde48a90b39cf2f303c5a550b05966e10cebe7e655677ef31d9c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f468aa19531bde48a90b39cf2f303c5a550b05966e10cebe7e655677ef31d9c8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:59.031412905 +0000 UTC m=+0.090161565 container init 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:59.035978085 +0000 UTC m=+0.094726725 container start 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:59.03929294 +0000 UTC m=+0.098041610 container attach 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:58.959447831 +0000 UTC m=+0.018196491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 11:09:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2432169405' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:09:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2432169405' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 11:09:59 compute-0 sweet_austin[74437]: 
Feb 02 11:09:59 compute-0 sweet_austin[74437]: [global]
Feb 02 11:09:59 compute-0 sweet_austin[74437]:         fsid = 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:59 compute-0 sweet_austin[74437]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 02 11:09:59 compute-0 systemd[1]: libpod-4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32.scope: Deactivated successfully.
Feb 02 11:09:59 compute-0 conmon[74437]: conmon 4e8f1744a14a90f7d17a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32.scope/container/memory.events
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:59.256211544 +0000 UTC m=+0.314960204 container died 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:09:59 compute-0 podman[74420]: 2026-02-02 11:09:59.291329107 +0000 UTC m=+0.350077747 container remove 4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32 (image=quay.io/ceph/ceph:v19, name=sweet_austin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:09:59 compute-0 systemd[1]: libpod-conmon-4e8f1744a14a90f7d17ab0c282dc46c64d80a556c2086e0e790cf6412df81d32.scope: Deactivated successfully.
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.351253128 +0000 UTC m=+0.041844046 container create 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:09:59 compute-0 systemd[1]: Started libpod-conmon-6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba.scope.
Feb 02 11:09:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ccd7c65c5be9dc3ab464ed5b572d409f920ba2c9177877be8c7c6a9a6441c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ccd7c65c5be9dc3ab464ed5b572d409f920ba2c9177877be8c7c6a9a6441c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ccd7c65c5be9dc3ab464ed5b572d409f920ba2c9177877be8c7c6a9a6441c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ccd7c65c5be9dc3ab464ed5b572d409f920ba2c9177877be8c7c6a9a6441c9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.333687376 +0000 UTC m=+0.024278324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.431821689 +0000 UTC m=+0.122412627 container init 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.435800382 +0000 UTC m=+0.126391300 container start 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.438847149 +0000 UTC m=+0.129438057 container attach 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:09:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375639857' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:09:59 compute-0 ceph-mon[74327]: monmap epoch 1
Feb 02 11:09:59 compute-0 ceph-mon[74327]: fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:09:59 compute-0 ceph-mon[74327]: last_changed 2026-02-02T11:09:56.920509+0000
Feb 02 11:09:59 compute-0 ceph-mon[74327]: created 2026-02-02T11:09:56.920509+0000
Feb 02 11:09:59 compute-0 ceph-mon[74327]: min_mon_release 19 (squid)
Feb 02 11:09:59 compute-0 ceph-mon[74327]: election_strategy: 1
Feb 02 11:09:59 compute-0 ceph-mon[74327]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:09:59 compute-0 ceph-mon[74327]: fsmap 
Feb 02 11:09:59 compute-0 ceph-mon[74327]: osdmap e1: 0 total, 0 up, 0 in
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mgrmap e1: no daemons active
Feb 02 11:09:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1023083298' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:09:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2432169405' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:09:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2432169405' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 11:09:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2375639857' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:09:59 compute-0 systemd[1]: libpod-6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba.scope: Deactivated successfully.
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.6391657 +0000 UTC m=+0.329756618 container died 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:09:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ccd7c65c5be9dc3ab464ed5b572d409f920ba2c9177877be8c7c6a9a6441c9-merged.mount: Deactivated successfully.
Feb 02 11:09:59 compute-0 podman[74475]: 2026-02-02 11:09:59.671867774 +0000 UTC m=+0.362458692 container remove 6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba (image=quay.io/ceph/ceph:v19, name=pedantic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:09:59 compute-0 systemd[1]: libpod-conmon-6d25501e5c79f436914198d5fc006bcf9e10afd212ab773d4d9f37d1acf540ba.scope: Deactivated successfully.
Feb 02 11:09:59 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:09:59 compute-0 ceph-mon[74327]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 02 11:09:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 shutdown
Feb 02 11:09:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0[74323]: 2026-02-02T11:09:59.832+0000 7fb854fd1640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 02 11:09:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0[74323]: 2026-02-02T11:09:59.832+0000 7fb854fd1640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 02 11:09:59 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 11:09:59 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 11:10:00 compute-0 podman[74558]: 2026-02-02 11:10:00.047084748 +0000 UTC m=+0.252419989 container died f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b119d726a6a5c57dd2a69316bf6efe8b8d277af52c4bb169e949af19953c6be-merged.mount: Deactivated successfully.
Feb 02 11:10:00 compute-0 podman[74558]: 2026-02-02 11:10:00.078903217 +0000 UTC m=+0.284238448 container remove f57f91ccb7edf69b119fdbdfc7164ae922f2f74e9422278d23a26fcbeac06137 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:00 compute-0 bash[74558]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0
Feb 02 11:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 11:10:00 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mon.compute-0.service: Deactivated successfully.
Feb 02 11:10:00 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:10:00 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:10:00 compute-0 podman[74657]: 2026-02-02 11:10:00.330059719 +0000 UTC m=+0.032614083 container create 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d76a75509f2fe1432b5bd262d4aadc1e71c6065ace657ea95fed9eeee6d03e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d76a75509f2fe1432b5bd262d4aadc1e71c6065ace657ea95fed9eeee6d03e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d76a75509f2fe1432b5bd262d4aadc1e71c6065ace657ea95fed9eeee6d03e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d76a75509f2fe1432b5bd262d4aadc1e71c6065ace657ea95fed9eeee6d03e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 podman[74657]: 2026-02-02 11:10:00.379519121 +0000 UTC m=+0.082073485 container init 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:00 compute-0 podman[74657]: 2026-02-02 11:10:00.385359488 +0000 UTC m=+0.087913852 container start 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 02 11:10:00 compute-0 bash[74657]: 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf
Feb 02 11:10:00 compute-0 podman[74657]: 2026-02-02 11:10:00.314780362 +0000 UTC m=+0.017334756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:00 compute-0 systemd[1]: Started Ceph mon.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:10:00 compute-0 ceph-mon[74676]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: pidfile_write: ignore empty --pid-file
Feb 02 11:10:00 compute-0 ceph-mon[74676]: load: jerasure load: lrc 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: RocksDB version: 7.9.2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Git sha 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Compile date 2025-07-17 03:12:14
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: DB SUMMARY
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: DB Session ID:  2U6BFZW95GLJ0BZKEBVK
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: CURRENT file:  CURRENT
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 59859 ; 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                         Options.error_if_exists: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.create_if_missing: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                                     Options.env: 0x5594e09e0c20
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                                Options.info_log: 0x5594e3027ac0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                              Options.statistics: (nil)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                               Options.use_fsync: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                              Options.db_log_dir: 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                                 Options.wal_dir: 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                    Options.write_buffer_manager: 0x5594e302b900
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.unordered_write: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                               Options.row_cache: None
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                              Options.wal_filter: None
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.two_write_queues: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.wal_compression: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.atomic_flush: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.max_background_jobs: 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.max_background_compactions: -1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.max_subcompactions: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.max_total_wal_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                          Options.max_open_files: -1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:       Options.compaction_readahead_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Compression algorithms supported:
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kZSTD supported: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kXpressCompression supported: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kBZip2Compression supported: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kLZ4Compression supported: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kZlibCompression supported: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         kSnappyCompression supported: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:           Options.merge_operator: 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:        Options.compaction_filter: None
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594e3026aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594e304b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:        Options.write_buffer_size: 33554432
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:  Options.max_write_buffer_number: 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.compression: NoCompression
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.num_levels: 7
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8d3f2a2d-cae1-4d7e-a420-44d61e6b143d
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030600428385, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030600432241, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 58095, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3209, "raw_average_key_size": 30, "raw_value_size": 55578, "raw_average_value_size": 529, "num_data_blocks": 9, "num_entries": 105, "num_filter_entries": 105, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030600, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030600432358, "job": 1, "event": "recovery_finished"}
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5594e304ce00
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: DB pointer 0x5594e3156000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:10:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.55 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.55 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594e304b350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
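The indented block above is the routine RocksDB statistics dump from the monitor's backing store, and nothing in it indicates trouble: zero stalls, a single sub-second flush, and two L0 files totalling ~60 KB. If these dumps need to be trended rather than eyeballed, a few lines of parsing suffice. A minimal sketch in Python, assuming the journal excerpt has been saved to mon-stats.txt (that path and the helper name are illustrative, not anything cephadm provides):

    import re

    # Field layout of the stock RocksDB "Cumulative compaction:" line,
    # exactly as printed in the dump above.
    PATTERN = re.compile(
        r"Cumulative compaction: ([\d.]+) GB write, ([\d.]+) MB/s write, "
        r"([\d.]+) GB read, ([\d.]+) MB/s read, ([\d.]+) seconds"
    )

    def compaction_write_rate(dump: str):
        """Return the cumulative compaction write rate in MB/s, or None."""
        m = PATTERN.search(dump)
        return float(m.group(2)) if m else None

    with open("mon-stats.txt") as f:   # assumed capture of this excerpt
        print(compaction_write_rate(f.read()))   # prints 3.55 for the dump above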
Feb 02 11:10:00 compute-0 ceph-mon[74676]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???) e1 preinit fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).mds e1 new map
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-02-02T11:09:58:610843+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
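print_map shows an empty MDS map: multiple filesystems are permitted (enable_multiple 1,1) but none exist yet, hence "No filesystems configured" and a legacy fscid of -1. The same state can be re-checked from the host around the real `ceph fs ls` command; a sketch under the assumption that the admin keyring is in place:

    import json, subprocess

    # `ceph fs ls --format json` returns a JSON list, one entry per CephFS.
    out = subprocess.run(
        ["ceph", "fs", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    filesystems = json.loads(out)   # [] while no filesystem exists, as here
    print("No filesystems configured" if not filesystems
          else [fs["name"] for fs in filesystems])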
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 02 11:10:00 compute-0 ceph-mon[74676]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:09:56.920509+0000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : created 2026-02-02T11:09:56.920509+0000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
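With a single monitor the election is trivial: win_standalone_election promotes rank 0 straight to leader, and the cluster log records a quorum of one with an empty osdmap. The same facts can be read back with `ceph quorum_status`; a short sketch, assuming the CLI can reach the mon from this host:

    import json, subprocess

    qs = json.loads(subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    # For the election above this prints: compute-0 ['compute-0']
    print(qs["quorum_leader_name"], qs["quorum_names"])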
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.452833855 +0000 UTC m=+0.042548716 container create e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 02 11:10:00 compute-0 systemd[1]: Started libpod-conmon-e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee.scope.
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: monmap epoch 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:00 compute-0 ceph-mon[74676]: last_changed 2026-02-02T11:09:56.920509+0000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: created 2026-02-02T11:09:56.920509+0000
Feb 02 11:10:00 compute-0 ceph-mon[74676]: min_mon_release 19 (squid)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: election_strategy: 1
Feb 02 11:10:00 compute-0 ceph-mon[74676]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:10:00 compute-0 ceph-mon[74676]: fsmap 
Feb 02 11:10:00 compute-0 ceph-mon[74676]: osdmap e1: 0 total, 0 up, 0 in
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mgrmap e1: no daemons active
Feb 02 11:10:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d990c27620cca540f543dc1abce4d26af37fd9ad34aa03b0ccea7af6d62350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d990c27620cca540f543dc1abce4d26af37fd9ad34aa03b0ccea7af6d62350/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d990c27620cca540f543dc1abce4d26af37fd9ad34aa03b0ccea7af6d62350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.436010554 +0000 UTC m=+0.025725405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.534572459 +0000 UTC m=+0.124287310 container init e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.53883005 +0000 UTC m=+0.128544881 container start e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.542174636 +0000 UTC m=+0.131889467 container attach e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:10:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb 02 11:10:00 compute-0 systemd[1]: libpod-e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee.scope: Deactivated successfully.
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.727074666 +0000 UTC m=+0.316789517 container died e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d990c27620cca540f543dc1abce4d26af37fd9ad34aa03b0ccea7af6d62350-merged.mount: Deactivated successfully.
Feb 02 11:10:00 compute-0 podman[74677]: 2026-02-02 11:10:00.755848458 +0000 UTC m=+0.345563289 container remove e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee (image=quay.io/ceph/ceph:v19, name=optimistic_carson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:10:00 compute-0 systemd[1]: libpod-conmon-e0e15549513f07f9e98b1b9bb9265f398fa3aeeda87bc198ad50d803d13a65ee.scope: Deactivated successfully.
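The optimistic_carson container above is cephadm's usual bootstrap pattern: resolve the image by digest, create, start, run a single admin command (here the `config set ... public_network` visible in the mon's handle_command line), then die and be removed, all within roughly a third of a second. Approximately, in Python around podman (every flag and the CIDR below are illustrative guesses inferred from the mon address, not cephadm's exact invocation):

    import subprocess

    # Throwaway container: --rm removes it on exit, host networking and the
    # mounted /etc/ceph let the one-shot ceph command reach the new mon.
    subprocess.run(
        ["podman", "run", "--rm", "--net=host",
         "-v", "/etc/ceph:/etc/ceph:z",
         "quay.io/ceph/ceph:v19",
         "ceph", "config", "set", "global",
         "public_network", "192.168.122.0/24"],   # CIDR assumed
        check=True,
    )

The immediately following fc27063d container repeats the same cycle for cluster_network.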
Feb 02 11:10:00 compute-0 podman[74770]: 2026-02-02 11:10:00.809037866 +0000 UTC m=+0.035756962 container create fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:00 compute-0 systemd[1]: Started libpod-conmon-fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75.scope.
Feb 02 11:10:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018c8075c3272989ea1015afcf2d8a861368af8b0ffda370bcd77767121c0a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018c8075c3272989ea1015afcf2d8a861368af8b0ffda370bcd77767121c0a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018c8075c3272989ea1015afcf2d8a861368af8b0ffda370bcd77767121c0a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:00 compute-0 podman[74770]: 2026-02-02 11:10:00.870702167 +0000 UTC m=+0.097421273 container init fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:10:00 compute-0 podman[74770]: 2026-02-02 11:10:00.874072814 +0000 UTC m=+0.100791910 container start fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:10:00 compute-0 podman[74770]: 2026-02-02 11:10:00.877023978 +0000 UTC m=+0.103743084 container attach fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:10:00 compute-0 podman[74770]: 2026-02-02 11:10:00.79410975 +0000 UTC m=+0.020828866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb 02 11:10:01 compute-0 systemd[1]: libpod-fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75.scope: Deactivated successfully.
Feb 02 11:10:01 compute-0 podman[74770]: 2026-02-02 11:10:01.059306203 +0000 UTC m=+0.286025339 container died fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4018c8075c3272989ea1015afcf2d8a861368af8b0ffda370bcd77767121c0a2-merged.mount: Deactivated successfully.
Feb 02 11:10:01 compute-0 podman[74770]: 2026-02-02 11:10:01.095039684 +0000 UTC m=+0.321758790 container remove fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75 (image=quay.io/ceph/ceph:v19, name=blissful_chebyshev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:01 compute-0 systemd[1]: libpod-conmon-fc27063d7696c29cfa90837e0cfcb3e777bc67894af77e7c6b4a45317e313f75.scope: Deactivated successfully.
Feb 02 11:10:01 compute-0 systemd[1]: Reloading.
Feb 02 11:10:01 compute-0 systemd-rc-local-generator[74853]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:10:01 compute-0 systemd-sysv-generator[74857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:10:01 compute-0 systemd[1]: Reloading.
Feb 02 11:10:01 compute-0 systemd-rc-local-generator[74892]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:10:01 compute-0 systemd-sysv-generator[74895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:10:01 compute-0 systemd[1]: Starting Ceph mgr.compute-0.dhyzzj for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:10:01 compute-0 podman[74949]: 2026-02-02 11:10:01.765375546 +0000 UTC m=+0.034256090 container create d3aa79f70c717fb0a814104212602b6c2fbce473c3af1033809d9d70786d44b6 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce875e3085e61955168d2aecf43a0986617f304182f3213078caba1c539e244/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce875e3085e61955168d2aecf43a0986617f304182f3213078caba1c539e244/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce875e3085e61955168d2aecf43a0986617f304182f3213078caba1c539e244/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce875e3085e61955168d2aecf43a0986617f304182f3213078caba1c539e244/merged/var/lib/ceph/mgr/ceph-compute-0.dhyzzj supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 podman[74949]: 2026-02-02 11:10:01.818189964 +0000 UTC m=+0.087070298 container init d3aa79f70c717fb0a814104212602b6c2fbce473c3af1033809d9d70786d44b6 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 02 11:10:01 compute-0 podman[74949]: 2026-02-02 11:10:01.821968362 +0000 UTC m=+0.090848696 container start d3aa79f70c717fb0a814104212602b6c2fbce473c3af1033809d9d70786d44b6 (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:01 compute-0 bash[74949]: d3aa79f70c717fb0a814104212602b6c2fbce473c3af1033809d9d70786d44b6
Feb 02 11:10:01 compute-0 podman[74949]: 2026-02-02 11:10:01.749474152 +0000 UTC m=+0.018354506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:01 compute-0 systemd[1]: Started Ceph mgr.compute-0.dhyzzj for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:10:01 compute-0 ceph-mgr[74969]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:10:01 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:10:01 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:10:01 compute-0 podman[74970]: 2026-02-02 11:10:01.891864768 +0000 UTC m=+0.040079696 container create 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:10:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:10:01 compute-0 systemd[1]: Started libpod-conmon-46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d.scope.
Feb 02 11:10:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194467afcc47b50e5aa26202d3b0f1d0e04be809d31d185a812173b60146e4b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194467afcc47b50e5aa26202d3b0f1d0e04be809d31d185a812173b60146e4b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194467afcc47b50e5aa26202d3b0f1d0e04be809d31d185a812173b60146e4b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:01 compute-0 podman[74970]: 2026-02-02 11:10:01.876652033 +0000 UTC m=+0.024866981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:01 compute-0 podman[74970]: 2026-02-02 11:10:01.97738301 +0000 UTC m=+0.125598028 container init 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:10:01 compute-0 podman[74970]: 2026-02-02 11:10:01.984347489 +0000 UTC m=+0.132562437 container start 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:10:01 compute-0 podman[74970]: 2026-02-02 11:10:01.987641403 +0000 UTC m=+0.135856381 container attach 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:02.018+0000 7fa879941140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:10:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:02.107+0000 7fa879941140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:10:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 11:10:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548172529' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:02 compute-0 blissful_wu[75007]: 
Feb 02 11:10:02 compute-0 blissful_wu[75007]: {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "health": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "status": "HEALTH_OK",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "checks": {},
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "mutes": []
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "election_epoch": 5,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "quorum": [
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         0
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     ],
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "quorum_names": [
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "compute-0"
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     ],
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "quorum_age": 1,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "monmap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "epoch": 1,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "min_mon_release_name": "squid",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_mons": 1
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "osdmap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "epoch": 1,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_osds": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_up_osds": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "osd_up_since": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_in_osds": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "osd_in_since": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_remapped_pgs": 0
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "pgmap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "pgs_by_state": [],
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_pgs": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_pools": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_objects": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "data_bytes": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "bytes_used": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "bytes_avail": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "bytes_total": 0
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "fsmap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "epoch": 1,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "btime": "2026-02-02T11:09:58:610843+0000",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "by_rank": [],
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "up:standby": 0
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "mgrmap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "available": false,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "num_standbys": 0,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "modules": [
Feb 02 11:10:02 compute-0 blissful_wu[75007]:             "iostat",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:             "nfs",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:             "restful"
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         ],
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "services": {}
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "servicemap": {
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "epoch": 1,
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "modified": "2026-02-02T11:09:58.613481+0000",
Feb 02 11:10:02 compute-0 blissful_wu[75007]:         "services": {}
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     },
Feb 02 11:10:02 compute-0 blissful_wu[75007]:     "progress_events": {}
Feb 02 11:10:02 compute-0 blissful_wu[75007]: }
Feb 02 11:10:02 compute-0 systemd[1]: libpod-46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d.scope: Deactivated successfully.
Feb 02 11:10:02 compute-0 conmon[75007]: conmon 46ef5ba07722b8420fad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d.scope/container/memory.events
Feb 02 11:10:02 compute-0 podman[74970]: 2026-02-02 11:10:02.188118247 +0000 UTC m=+0.336333275 container died 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-194467afcc47b50e5aa26202d3b0f1d0e04be809d31d185a812173b60146e4b4-merged.mount: Deactivated successfully.
Feb 02 11:10:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/548172529' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:02 compute-0 podman[74970]: 2026-02-02 11:10:02.228290725 +0000 UTC m=+0.376505663 container remove 46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d (image=quay.io/ceph/ceph:v19, name=blissful_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:02 compute-0 systemd[1]: libpod-conmon-46ef5ba07722b8420fad876984baf245b797dcad39d47ef13ef4cf03f2968d1d.scope: Deactivated successfully.
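The JSON emitted by blissful_wu above is a `ceph status --format json-pretty` snapshot of the half-bootstrapped cluster: HEALTH_OK with one mon in quorum, but zero OSDs, no pools, and no active mgr yet. A hedged sketch of the checks a bootstrap script might run against a saved copy of that document (field names are taken verbatim from the dump; the file name is assumed):

    import json

    with open("status.json") as f:   # assumed saved copy of the dump above
        status = json.load(f)

    assert status["health"]["status"] == "HEALTH_OK"
    assert status["monmap"]["num_mons"] == 1
    assert status["osdmap"]["num_osds"] == 0      # no OSDs deployed yet
    assert not status["mgrmap"]["available"]      # mgr still loading modules

The second snapshot a few lines below differs only in quorum_age, confirming cephadm is polling while it waits for the mgr.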
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:10:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:02.935+0000 7fa879941140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:10:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:03.599+0000 7fa879941140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:03.764+0000 7fa879941140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:10:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:03.841+0000 7fa879941140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:10:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:10:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:04.003+0000 7fa879941140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.289861874 +0000 UTC m=+0.039997693 container create 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:10:04 compute-0 systemd[1]: Started libpod-conmon-0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529.scope.
Feb 02 11:10:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7431a3f0f4994f8c20ec40f291c62a866b5b7f15995f502ed56042fa3e610dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7431a3f0f4994f8c20ec40f291c62a866b5b7f15995f502ed56042fa3e610dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7431a3f0f4994f8c20ec40f291c62a866b5b7f15995f502ed56042fa3e610dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.359449711 +0000 UTC m=+0.109585540 container init 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.363424615 +0000 UTC m=+0.113560434 container start 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.366100211 +0000 UTC m=+0.116236180 container attach 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.271567812 +0000 UTC m=+0.021703631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:10:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 11:10:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224087213' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:04 compute-0 sharp_poitras[75073]: 
Feb 02 11:10:04 compute-0 sharp_poitras[75073]: {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "health": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "status": "HEALTH_OK",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "checks": {},
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "mutes": []
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "election_epoch": 5,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "quorum": [
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         0
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     ],
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "quorum_names": [
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "compute-0"
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     ],
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "quorum_age": 4,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "monmap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "epoch": 1,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "min_mon_release_name": "squid",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_mons": 1
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "osdmap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "epoch": 1,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_osds": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_up_osds": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "osd_up_since": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_in_osds": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "osd_in_since": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_remapped_pgs": 0
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "pgmap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "pgs_by_state": [],
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_pgs": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_pools": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_objects": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "data_bytes": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "bytes_used": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "bytes_avail": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "bytes_total": 0
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "fsmap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "epoch": 1,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "btime": "2026-02-02T11:09:58:610843+0000",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "by_rank": [],
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "up:standby": 0
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "mgrmap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "available": false,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "num_standbys": 0,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "modules": [
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:             "iostat",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:             "nfs",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:             "restful"
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         ],
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "services": {}
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "servicemap": {
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "epoch": 1,
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "modified": "2026-02-02T11:09:58.613481+0000",
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:         "services": {}
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     },
Feb 02 11:10:04 compute-0 sharp_poitras[75073]:     "progress_events": {}
Feb 02 11:10:04 compute-0 sharp_poitras[75073]: }
Feb 02 11:10:04 compute-0 systemd[1]: libpod-0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529.scope: Deactivated successfully.
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.552864714 +0000 UTC m=+0.303000523 container died 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7431a3f0f4994f8c20ec40f291c62a866b5b7f15995f502ed56042fa3e610dc-merged.mount: Deactivated successfully.
Feb 02 11:10:04 compute-0 podman[75056]: 2026-02-02 11:10:04.578430995 +0000 UTC m=+0.328566804 container remove 0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529 (image=quay.io/ceph/ceph:v19, name=sharp_poitras, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:04 compute-0 systemd[1]: libpod-conmon-0ce0f6ee2446b961f914702c30c8c29444a7888dbd988318f33eeec41a870529.scope: Deactivated successfully.
Feb 02 11:10:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1224087213' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:10:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.072+0000 7fa879941140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.297+0000 7fa879941140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.382+0000 7fa879941140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.441+0000 7fa879941140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.517+0000 7fa879941140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.591+0000 7fa879941140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:10:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:10:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:05.941+0000 7fa879941140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:10:06 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:10:06 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:10:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:06.037+0000 7fa879941140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:10:06 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:10:06 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:10:06 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:10:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:06.481+0000 7fa879941140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
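The recurring "Module X has missing NOTIFY_TYPES member" lines are warnings from the mgr's module loader: recent Ceph releases expect each Python module to declare which cluster notifications it consumes, and in-tree modules that omit the attribute still load, so the warnings are cosmetic here. A minimal sketch of a module shape that would satisfy the check, assuming the in-tree mgr_module API (the class name is hypothetical):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring consumed notification types silences the loader warning.
        NOTIFY_TYPES = [NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")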
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.628977819 +0000 UTC m=+0.033907479 container create a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:06 compute-0 systemd[1]: Started libpod-conmon-a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29.scope.
Feb 02 11:10:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0557a000a5c8889c14bbe827ff1541c73852b0861d14a4a63ebe57083eae64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0557a000a5c8889c14bbe827ff1541c73852b0861d14a4a63ebe57083eae64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0557a000a5c8889c14bbe827ff1541c73852b0861d14a4a63ebe57083eae64/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.690420374 +0000 UTC m=+0.095350064 container init a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.694071728 +0000 UTC m=+0.099001378 container start a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.69693213 +0000 UTC m=+0.101861790 container attach a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.616084201 +0000 UTC m=+0.021013891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 11:10:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337250247' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]: 
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]: {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "health": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "status": "HEALTH_OK",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "checks": {},
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "mutes": []
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "election_epoch": 5,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "quorum": [
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         0
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     ],
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "quorum_names": [
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "compute-0"
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     ],
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "quorum_age": 6,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "monmap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "epoch": 1,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "min_mon_release_name": "squid",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_mons": 1
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "osdmap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "epoch": 1,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_osds": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_up_osds": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "osd_up_since": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_in_osds": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "osd_in_since": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_remapped_pgs": 0
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "pgmap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "pgs_by_state": [],
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_pgs": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_pools": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_objects": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "data_bytes": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "bytes_used": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "bytes_avail": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "bytes_total": 0
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "fsmap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "epoch": 1,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "btime": "2026-02-02T11:09:58:610843+0000",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "by_rank": [],
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "up:standby": 0
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "mgrmap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "available": false,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "num_standbys": 0,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "modules": [
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:             "iostat",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:             "nfs",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:             "restful"
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         ],
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "services": {}
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "servicemap": {
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "epoch": 1,
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "modified": "2026-02-02T11:09:58.613481+0000",
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:         "services": {}
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     },
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]:     "progress_events": {}
Feb 02 11:10:06 compute-0 quirky_chandrasekhar[75128]: }
Feb 02 11:10:06 compute-0 systemd[1]: libpod-a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29.scope: Deactivated successfully.
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.900954726 +0000 UTC m=+0.305884396 container died a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0557a000a5c8889c14bbe827ff1541c73852b0861d14a4a63ebe57083eae64-merged.mount: Deactivated successfully.
Feb 02 11:10:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1337250247' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:06 compute-0 podman[75111]: 2026-02-02 11:10:06.939400814 +0000 UTC m=+0.344330464 container remove a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29 (image=quay.io/ceph/ceph:v19, name=quirky_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:10:06 compute-0 systemd[1]: libpod-conmon-a6b14f6ff971013e581526d55edf6211a85247b617c62e0a726c2b05425e1a29.scope: Deactivated successfully.
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.063+0000 7fa879941140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.143+0000 7fa879941140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.231+0000 7fa879941140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.390+0000 7fa879941140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.473+0000 7fa879941140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.643+0000 7fa879941140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:07 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:10:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:07.882+0000 7fa879941140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:10:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:08.159+0000 7fa879941140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:10:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:08.228+0000 7fa879941140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x56475bd4a9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr handle_mgr_map Activating!
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr handle_mgr_map I am now activating
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.dhyzzj(active, starting, since 0.013575s)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: balancer
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer INFO root] Starting
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: crash
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Manager daemon compute-0.dhyzzj is now available
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:10:08
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [balancer INFO root] No pools available
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: devicehealth
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: iostat
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Starting
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: nfs
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: orchestrator
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: pg_autoscaler
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: progress
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [progress INFO root] Loading...
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [progress INFO root] No stored events to load
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded [] historic events
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] recovery thread starting
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] starting setup
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: rbd_support
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: restful
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: status
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [restful INFO root] server_addr: :: server_port: 8003
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: telemetry
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [restful WARNING root] server not running: no certificate configured
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] PerfHandler: starting
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TaskHandler: starting
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"} v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: [rbd_support INFO root] setup complete
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mgrmap e2: compute-0.dhyzzj(active, starting, since 0.013575s)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: Manager daemon compute-0.dhyzzj is now available
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:10:08 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: volumes
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb 02 11:10:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.009609951 +0000 UTC m=+0.049997719 container create c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:09 compute-0 systemd[1]: Started libpod-conmon-c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4.scope.
Feb 02 11:10:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7700e1033b1b36e8ffadaa909a9609bd0c0ec7bab7339b92c9dda1b9697ed7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7700e1033b1b36e8ffadaa909a9609bd0c0ec7bab7339b92c9dda1b9697ed7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7700e1033b1b36e8ffadaa909a9609bd0c0ec7bab7339b92c9dda1b9697ed7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.083000217 +0000 UTC m=+0.123388025 container init c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.087542496 +0000 UTC m=+0.127930254 container start c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:08.993943473 +0000 UTC m=+0.034331251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.090451699 +0000 UTC m=+0.130839507 container attach c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:10:09 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.dhyzzj(active, since 1.03323s)
Feb 02 11:10:09 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:09 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:09 compute-0 ceph-mon[74676]: from='mgr.14102 192.168.122.100:0/227651815' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:09 compute-0 ceph-mon[74676]: mgrmap e3: compute-0.dhyzzj(active, since 1.03323s)
Feb 02 11:10:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 11:10:09 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778334477' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]: 
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]: {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "health": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "status": "HEALTH_OK",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "checks": {},
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "mutes": []
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "election_epoch": 5,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "quorum": [
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         0
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     ],
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "quorum_names": [
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "compute-0"
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     ],
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "quorum_age": 8,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "monmap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "epoch": 1,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "min_mon_release_name": "squid",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_mons": 1
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "osdmap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "epoch": 1,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_osds": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_up_osds": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "osd_up_since": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_in_osds": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "osd_in_since": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_remapped_pgs": 0
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "pgmap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "pgs_by_state": [],
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_pgs": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_pools": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_objects": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "data_bytes": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "bytes_used": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "bytes_avail": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "bytes_total": 0
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "fsmap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "epoch": 1,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "btime": "2026-02-02T11:09:58:610843+0000",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "by_rank": [],
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "up:standby": 0
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "mgrmap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "available": true,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "num_standbys": 0,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "modules": [
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:             "iostat",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:             "nfs",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:             "restful"
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         ],
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "services": {}
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "servicemap": {
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "epoch": 1,
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "modified": "2026-02-02T11:09:58.613481+0000",
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:         "services": {}
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     },
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]:     "progress_events": {}
Feb 02 11:10:09 compute-0 musing_ishizaka[75264]: }
Feb 02 11:10:09 compute-0 systemd[1]: libpod-c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4.scope: Deactivated successfully.
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.338507193 +0000 UTC m=+0.378894961 container died c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7700e1033b1b36e8ffadaa909a9609bd0c0ec7bab7339b92c9dda1b9697ed7-merged.mount: Deactivated successfully.
Feb 02 11:10:09 compute-0 podman[75248]: 2026-02-02 11:10:09.378984039 +0000 UTC m=+0.419371797 container remove c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4 (image=quay.io/ceph/ceph:v19, name=musing_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:09 compute-0 systemd[1]: libpod-conmon-c9173f1c4e25db9306760a46ceb148afc2bdefa3954da949097deb71cd5f1ae4.scope: Deactivated successfully.
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.446779025 +0000 UTC m=+0.044859242 container create fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:09 compute-0 systemd[1]: Started libpod-conmon-fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47.scope.
Feb 02 11:10:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13da398e08547631a7c34853e4bbe7b8ebfb02f44d9c35ced5c0f1131fe21a6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13da398e08547631a7c34853e4bbe7b8ebfb02f44d9c35ced5c0f1131fe21a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13da398e08547631a7c34853e4bbe7b8ebfb02f44d9c35ced5c0f1131fe21a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13da398e08547631a7c34853e4bbe7b8ebfb02f44d9c35ced5c0f1131fe21a6d/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.505621025 +0000 UTC m=+0.103701262 container init fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.513428098 +0000 UTC m=+0.111508315 container start fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.51630292 +0000 UTC m=+0.114383137 container attach fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.432425685 +0000 UTC m=+0.030505932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 11:10:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2475386471' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:10:09 compute-0 intelligent_lalande[75319]: 
Feb 02 11:10:09 compute-0 intelligent_lalande[75319]: [global]
Feb 02 11:10:09 compute-0 intelligent_lalande[75319]:         fsid = 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:09 compute-0 intelligent_lalande[75319]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 02 11:10:09 compute-0 systemd[1]: libpod-fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47.scope: Deactivated successfully.
Feb 02 11:10:09 compute-0 conmon[75319]: conmon fed59fc219515a0c1d2a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47.scope/container/memory.events
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.863399651 +0000 UTC m=+0.461479868 container died fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-13da398e08547631a7c34853e4bbe7b8ebfb02f44d9c35ced5c0f1131fe21a6d-merged.mount: Deactivated successfully.
Feb 02 11:10:09 compute-0 podman[75303]: 2026-02-02 11:10:09.901979002 +0000 UTC m=+0.500059229 container remove fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47 (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:09 compute-0 systemd[1]: libpod-conmon-fed59fc219515a0c1d2a2debf9dcf0e286546dc27bf361b8a168ae5c9f961b47.scope: Deactivated successfully.
Feb 02 11:10:09 compute-0 podman[75357]: 2026-02-02 11:10:09.968013708 +0000 UTC m=+0.044677047 container create 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:10:09 compute-0 systemd[1]: Started libpod-conmon-4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a.scope.
Feb 02 11:10:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b581ab157584bc245bb357ad067639155090653e59de6df8671c2375e8819b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b581ab157584bc245bb357ad067639155090653e59de6df8671c2375e8819b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b581ab157584bc245bb357ad067639155090653e59de6df8671c2375e8819b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:10 compute-0 podman[75357]: 2026-02-02 11:10:10.035544716 +0000 UTC m=+0.112208055 container init 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:10 compute-0 podman[75357]: 2026-02-02 11:10:09.943125917 +0000 UTC m=+0.019789336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:10 compute-0 podman[75357]: 2026-02-02 11:10:10.039348305 +0000 UTC m=+0.116011634 container start 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:10 compute-0 podman[75357]: 2026-02-02 11:10:10.059596603 +0000 UTC m=+0.136259952 container attach 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:10:10 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.dhyzzj(active, since 2s)
Feb 02 11:10:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3778334477' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:10:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2475386471' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:10:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb 02 11:10:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/567613830' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Feb 02 11:10:11 compute-0 ceph-mon[74676]: mgrmap e4: compute-0.dhyzzj(active, since 2s)
Feb 02 11:10:11 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/567613830' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Feb 02 11:10:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/567613830' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  1: '-n'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  2: 'mgr.compute-0.dhyzzj'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  3: '-f'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  4: '--setuser'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  5: 'ceph'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  6: '--setgroup'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  7: 'ceph'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  8: '--default-log-to-file=false'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  9: '--default-log-to-journald=true'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr respawn  exe_path /proc/self/exe
Feb 02 11:10:11 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.dhyzzj(active, since 3s)
Feb 02 11:10:11 compute-0 systemd[1]: libpod-4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a.scope: Deactivated successfully.
Feb 02 11:10:11 compute-0 conmon[75373]: conmon 4139ec1d07da7f8d5de5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a.scope/container/memory.events
Feb 02 11:10:11 compute-0 podman[75357]: 2026-02-02 11:10:11.355221651 +0000 UTC m=+1.431884980 container died 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0b581ab157584bc245bb357ad067639155090653e59de6df8671c2375e8819b-merged.mount: Deactivated successfully.
Feb 02 11:10:11 compute-0 podman[75357]: 2026-02-02 11:10:11.387820512 +0000 UTC m=+1.464483841 container remove 4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a (image=quay.io/ceph/ceph:v19, name=elastic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:10:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setuser ceph since I am not root
Feb 02 11:10:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setgroup ceph since I am not root
Feb 02 11:10:11 compute-0 systemd[1]: libpod-conmon-4139ec1d07da7f8d5de5f66f007470d093b5a64a19e8393e8a5e2dfc1deef38a.scope: Deactivated successfully.
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.430776099 +0000 UTC m=+0.029356709 container create 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:11 compute-0 systemd[1]: Started libpod-conmon-8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d.scope.
Feb 02 11:10:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a801654431c3e81c19e3b91bb396788a1ea007666ddf7f65d302f67837ce28d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a801654431c3e81c19e3b91bb396788a1ea007666ddf7f65d302f67837ce28d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a801654431c3e81c19e3b91bb396788a1ea007666ddf7f65d302f67837ce28d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:10:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:11.513+0000 7fef79106140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
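
A note on the "missing NOTIFY_TYPES member" warnings, which repeat below for most of the modules being loaded: in this Ceph release a mgr module may declare a NOTIFY_TYPES list so that ceph-mgr only delivers the notification types the module actually handles, and the loader logs this warning for modules that omit the attribute. The warnings are benign and the modules still load. A minimal sketch, assuming the in-tree mgr_module API available inside the ceph-mgr container (the Example class and its subscriptions are illustrative only):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Subscribe only to OSD map and PG summary notifications; without
        # this attribute the loader emits the warning seen in the log.
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            self.log.info("notification: %s (%s)", notify_type, notify_id)
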
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.514290274 +0000 UTC m=+0.112870904 container init 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.418005504 +0000 UTC m=+0.016586134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.517607418 +0000 UTC m=+0.116188018 container start 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.52012356 +0000 UTC m=+0.118704170 container attach 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:10:11 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:10:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:11.592+0000 7fef79106140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:10:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 02 11:10:11 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062420162' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:10:11 compute-0 optimistic_jang[75449]: {
Feb 02 11:10:11 compute-0 optimistic_jang[75449]:     "epoch": 5,
Feb 02 11:10:11 compute-0 optimistic_jang[75449]:     "available": true,
Feb 02 11:10:11 compute-0 optimistic_jang[75449]:     "active_name": "compute-0.dhyzzj",
Feb 02 11:10:11 compute-0 optimistic_jang[75449]:     "num_standby": 0
Feb 02 11:10:11 compute-0 optimistic_jang[75449]: }
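
The short-lived quay.io/ceph/ceph:v19 containers wrapped around this output (optimistic_jang here, and the create/start/died/remove cycles above and below) are one-shot cephadm shells; this one ran "ceph mgr stat", and the JSON above is its stdout as captured by journald under the container name. A sketch of the same availability poll, assuming a host with a working admin keyring and the ceph CLI on PATH:

    import json, subprocess, time

    def wait_for_active_mgr(timeout=60):
        # Poll `ceph mgr stat` until a mgr reports available, mirroring
        # the bootstrap check whose output appears above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            stat = json.loads(subprocess.check_output(
                ["ceph", "mgr", "stat", "--format", "json"]))
            if stat.get("available"):
                return stat["active_name"]
            time.sleep(1)
        raise TimeoutError("no active mgr became available")
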
Feb 02 11:10:11 compute-0 systemd[1]: libpod-8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d.scope: Deactivated successfully.
Feb 02 11:10:11 compute-0 conmon[75449]: conmon 8aae3df7e582d8d79111 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d.scope/container/memory.events
Feb 02 11:10:11 compute-0 podman[75414]: 2026-02-02 11:10:11.995057153 +0000 UTC m=+0.593637763 container died 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a801654431c3e81c19e3b91bb396788a1ea007666ddf7f65d302f67837ce28d9-merged.mount: Deactivated successfully.
Feb 02 11:10:12 compute-0 podman[75414]: 2026-02-02 11:10:12.029414294 +0000 UTC m=+0.627994904 container remove 8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d (image=quay.io/ceph/ceph:v19, name=optimistic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:10:12 compute-0 systemd[1]: libpod-conmon-8aae3df7e582d8d7911127d8b3049872167ce1383ae3feb258bdf3edb3310f7d.scope: Deactivated successfully.
Feb 02 11:10:12 compute-0 podman[75493]: 2026-02-02 11:10:12.07657478 +0000 UTC m=+0.032637673 container create 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:10:12 compute-0 systemd[1]: Started libpod-conmon-1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4.scope.
Feb 02 11:10:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922228a8bdd2dc9a5bd8b8456516a1e99dad8c9bbd4855218d48b1f8a989836/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922228a8bdd2dc9a5bd8b8456516a1e99dad8c9bbd4855218d48b1f8a989836/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922228a8bdd2dc9a5bd8b8456516a1e99dad8c9bbd4855218d48b1f8a989836/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:12 compute-0 podman[75493]: 2026-02-02 11:10:12.139045084 +0000 UTC m=+0.095107997 container init 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:10:12 compute-0 podman[75493]: 2026-02-02 11:10:12.142603256 +0000 UTC m=+0.098666149 container start 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:12 compute-0 podman[75493]: 2026-02-02 11:10:12.157281085 +0000 UTC m=+0.113343998 container attach 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:10:12 compute-0 podman[75493]: 2026-02-02 11:10:12.062134598 +0000 UTC m=+0.018197511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:12 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/567613830' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 02 11:10:12 compute-0 ceph-mon[74676]: mgrmap e5: compute-0.dhyzzj(active, since 3s)
Feb 02 11:10:12 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2062420162' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:10:12 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:10:12 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:10:12 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:10:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:12.490+0000 7fef79106140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:13.139+0000 7fef79106140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:13.311+0000 7fef79106140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:13.385+0000 7fef79106140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:10:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:13.529+0000 7fef79106140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:10:13 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:10:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:14.550+0000 7fef79106140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:10:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:14.786+0000 7fef79106140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:10:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:14.866+0000 7fef79106140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:10:14 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:10:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:14.935+0000 7fef79106140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:10:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:15.020+0000 7fef79106140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:10:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:15.102+0000 7fef79106140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:10:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:15.460+0000 7fef79106140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:10:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:15.565+0000 7fef79106140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:10:15 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.012+0000 7fef79106140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.614+0000 7fef79106140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.689+0000 7fef79106140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.771+0000 7fef79106140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.925+0000 7fef79106140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:10:16 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:10:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:16.993+0000 7fef79106140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:10:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:17.151+0000 7fef79106140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:10:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:17.380+0000 7fef79106140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:10:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:17.652+0000 7fef79106140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:10:17.727+0000 7fef79106140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x556403576d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr handle_mgr_map Activating!
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr handle_mgr_map I am now activating
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.dhyzzj(active, starting, since 0.0178119s)
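
The sequence above is the mon re-promoting the restarted daemon: mgrmap epoch 6 publishes it as "active, starting", and epoch 7 (further below) flips it to plain "active" once module loading completes. The full map behind these one-line summaries can be inspected with "ceph mgr dump"; a sketch, under the same CLI assumptions as above:

    import json, subprocess

    # Dump the mgrmap that the "mgrmap eN" cluster-log lines summarize.
    mgrmap = json.loads(subprocess.check_output(["ceph", "mgr", "dump"]))
    print(mgrmap["epoch"], mgrmap["active_name"], len(mgrmap["standbys"]))
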
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: balancer
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Starting
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Manager daemon compute-0.dhyzzj is now available
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:10:17
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [balancer INFO root] No pools available
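
On activation the balancer module immediately runs one optimization pass with its defaults, mode upmap and a 5% max-misplaced ratio, and in this still-empty cluster the pass is a no-op ("No pools available"). A sketch of checking the module state an operator would see at this point (the key names are from the balancer's JSON status output and may vary by release):

    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    # Expect mode "upmap" and active true, matching the log above.
    print(status["mode"], status["active"])
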
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:10:17 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:10:17 compute-0 ceph-mon[74676]: osdmap e2: 0 total, 0 up, 0 in
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mgrmap e6: compute-0.dhyzzj(active, starting, since 0.0178119s)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mon[74676]: Manager daemon compute-0.dhyzzj is now available
Feb 02 11:10:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: cephadm
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: crash
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: devicehealth
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: iostat
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: nfs
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: orchestrator
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: pg_autoscaler
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: progress
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [progress INFO root] Loading...
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [progress INFO root] No stored events to load
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded [] historic events
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] recovery thread starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] starting setup
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: rbd_support
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: restful
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: status
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [restful INFO root] server_addr: :: server_port: 8003
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] PerfHandler: starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TaskHandler: starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [restful WARNING root] server not running: no certificate configured
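
The restful module constructs successfully but will not serve until a TLS certificate is configured; "server not running: no certificate configured" is its normal state right after bootstrap. The documented quick fix is a self-signed certificate, sketched below (a CA-issued pair can instead be stored under the mgr/restful/$name/crt and mgr/restful/$name/key config keys):

    import subprocess

    # Generate a self-signed cert so the restful server (port 8003 per
    # the server_addr/server_port line above) can start.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
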
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: telemetry
Feb 02 11:10:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"} v 0)
Feb 02 11:10:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] setup complete
Feb 02 11:10:17 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: volumes
Feb 02 11:10:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Feb 02 11:10:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Feb 02 11:10:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.dhyzzj(active, since 1.0264s)
Feb 02 11:10:18 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 02 11:10:18 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 02 11:10:18 compute-0 modest_knuth[75514]: {
Feb 02 11:10:18 compute-0 modest_knuth[75514]:     "mgrmap_epoch": 7,
Feb 02 11:10:18 compute-0 modest_knuth[75514]:     "initialized": true
Feb 02 11:10:18 compute-0 modest_knuth[75514]: }
Feb 02 11:10:18 compute-0 systemd[1]: libpod-1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4.scope: Deactivated successfully.
Feb 02 11:10:18 compute-0 podman[75493]: 2026-02-02 11:10:18.790483591 +0000 UTC m=+6.746546514 container died 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:10:18 compute-0 ceph-mon[74676]: Found migration_current of "None". Setting to last migration.
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:18 compute-0 ceph-mon[74676]: mgrmap e7: compute-0.dhyzzj(active, since 1.0264s)
Feb 02 11:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1922228a8bdd2dc9a5bd8b8456516a1e99dad8c9bbd4855218d48b1f8a989836-merged.mount: Deactivated successfully.
Feb 02 11:10:18 compute-0 podman[75493]: 2026-02-02 11:10:18.825604324 +0000 UTC m=+6.781667247 container remove 1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4 (image=quay.io/ceph/ceph:v19, name=modest_knuth, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:18 compute-0 systemd[1]: libpod-conmon-1219322d0d5414e1e23128ab4e2167dadd5e950d38d1c9c583b59ea8a2ff0eb4.scope: Deactivated successfully.
Feb 02 11:10:18 compute-0 podman[75661]: 2026-02-02 11:10:18.880276205 +0000 UTC m=+0.037868662 container create 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:10:18 compute-0 systemd[1]: Started libpod-conmon-53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8.scope.
Feb 02 11:10:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05be26b783157c24fd5459161e5a51a79d25a1be964c5339822c8284115d66c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05be26b783157c24fd5459161e5a51a79d25a1be964c5339822c8284115d66c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05be26b783157c24fd5459161e5a51a79d25a1be964c5339822c8284115d66c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:18 compute-0 podman[75661]: 2026-02-02 11:10:18.942835261 +0000 UTC m=+0.100427728 container init 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:10:18 compute-0 podman[75661]: 2026-02-02 11:10:18.946894587 +0000 UTC m=+0.104487034 container start 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:10:18 compute-0 podman[75661]: 2026-02-02 11:10:18.951349315 +0000 UTC m=+0.108941782 container attach 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:10:18 compute-0 podman[75661]: 2026-02-02 11:10:18.862677052 +0000 UTC m=+0.020269509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb 02 11:10:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:10:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:19 compute-0 systemd[1]: libpod-53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8.scope: Deactivated successfully.
Feb 02 11:10:19 compute-0 podman[75661]: 2026-02-02 11:10:19.31187603 +0000 UTC m=+0.469468487 container died 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b05be26b783157c24fd5459161e5a51a79d25a1be964c5339822c8284115d66c-merged.mount: Deactivated successfully.
Feb 02 11:10:19 compute-0 podman[75661]: 2026-02-02 11:10:19.339925781 +0000 UTC m=+0.497518228 container remove 53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8 (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:10:19 compute-0 systemd[1]: libpod-conmon-53c4ecd9921295816a97354fd8aa3027b106d2fa286ad234cf1695b46b6976f8.scope: Deactivated successfully.
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.386374787 +0000 UTC m=+0.030961195 container create a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:10:19] ENGINE Bus STARTING
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:10:19] ENGINE Bus STARTING
Feb 02 11:10:19 compute-0 systemd[1]: Started libpod-conmon-a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce.scope.
Feb 02 11:10:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4d5c1bc13b0ad3f680663245a2c065c8c3e1771f76c6521f7ca570c93251a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4d5c1bc13b0ad3f680663245a2c065c8c3e1771f76c6521f7ca570c93251a4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4d5c1bc13b0ad3f680663245a2c065c8c3e1771f76c6521f7ca570c93251a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.443577401 +0000 UTC m=+0.088163819 container init a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.449590152 +0000 UTC m=+0.094176570 container start a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.453901475 +0000 UTC m=+0.098487893 container attach a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.372728048 +0000 UTC m=+0.017314476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:10:19] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:10:19] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:10:19] ENGINE Client ('192.168.122.100', 38618) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:10:19] ENGINE Client ('192.168.122.100', 38618) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:10:19] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:10:19] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:10:19] ENGINE Bus STARTED
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:10:19] ENGINE Bus STARTED
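
The cephadm module has now started its two embedded CherryPy servers: an HTTPS endpoint on port 7150 for agents, using the key pair stored moments earlier under the mgr/cephadm/cert_store.* config-keys, and a plain-HTTP service-discovery endpoint on 8765. The "Client ... lost" line during the TLS handshake is consistent with something opening and immediately closing the socket, such as a reachability check, rather than a server-side fault. A sketch of such a probe against the HTTPS endpoint (address and port taken from the log; verification is disabled because the endpoint presents the cluster-internal root certificate):

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # cluster-internal CA, not in the system store
    with socket.create_connection(("192.168.122.100", 7150), timeout=5) as s:
        with ctx.wrap_socket(s, server_hostname="192.168.122.100") as tls:
            print("TLS established:", tls.version())
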
Feb 02 11:10:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:10:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb 02 11:10:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO root] Set ssh ssh_user
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb 02 11:10:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb 02 11:10:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO root] Set ssh ssh_config
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb 02 11:10:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb 02 11:10:19 compute-0 laughing_mccarthy[75736]: ssh user set to ceph-admin. sudo will be used
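
"cephadm set-user" stores the SSH user under the mgr/cephadm/ssh_user config-key (the audited writes just above), and because ceph-admin is not root the module notes it will wrap remote commands in sudo. The /tmp/cephadm-ssh-key mounts in the next one-shot container are the cluster SSH key pair being passed through so the key can be authorized for that user. A sketch of the matching operator step when adding hosts (the target user and host are assumptions):

    import subprocess

    # Fetch the cluster's public SSH key; it must be authorized for the
    # configured user (ceph-admin here) on every host cephadm manages.
    pub = subprocess.check_output(["ceph", "cephadm", "get-pub-key"], text=True)
    with open("ceph.pub", "w") as f:
        f.write(pub)
    # Then, e.g.: ssh-copy-id -f -i ceph.pub ceph-admin@<new-host>
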
Feb 02 11:10:19 compute-0 systemd[1]: libpod-a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce.scope: Deactivated successfully.
Feb 02 11:10:19 compute-0 conmon[75736]: conmon a1c2655a55ebc82f4d81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce.scope/container/memory.events
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.832079545 +0000 UTC m=+0.476665953 container died a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 02 11:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4d5c1bc13b0ad3f680663245a2c065c8c3e1771f76c6521f7ca570c93251a4-merged.mount: Deactivated successfully.
Feb 02 11:10:19 compute-0 podman[75718]: 2026-02-02 11:10:19.870297896 +0000 UTC m=+0.514884324 container remove a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce (image=quay.io/ceph/ceph:v19, name=laughing_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:10:19 compute-0 systemd[1]: libpod-conmon-a1c2655a55ebc82f4d81fab11c1f2320690b30ac82ec6e20e141e12ce38f0dce.scope: Deactivated successfully.
Feb 02 11:10:19 compute-0 podman[75796]: 2026-02-02 11:10:19.926561083 +0000 UTC m=+0.040510718 container create 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:10:19 compute-0 systemd[1]: Started libpod-conmon-8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777.scope.
Feb 02 11:10:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:19 compute-0 podman[75796]: 2026-02-02 11:10:19.99510255 +0000 UTC m=+0.109052195 container init 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:10:20 compute-0 podman[75796]: 2026-02-02 11:10:20.00035777 +0000 UTC m=+0.114307415 container start 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:20 compute-0 podman[75796]: 2026-02-02 11:10:19.908121486 +0000 UTC m=+0.022071151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:20 compute-0 podman[75796]: 2026-02-02 11:10:20.003659395 +0000 UTC m=+0.117609030 container attach 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:20 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.dhyzzj(active, since 2s)
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: [02/Feb/2026:11:10:19] ENGINE Bus STARTING
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb 02 11:10:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: [cephadm INFO root] Set ssh ssh_identity_key
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: [cephadm INFO root] Set ssh private key
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Set ssh private key
Feb 02 11:10:20 compute-0 systemd[1]: libpod-8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777.scope: Deactivated successfully.
Feb 02 11:10:20 compute-0 podman[75796]: 2026-02-02 11:10:20.363790758 +0000 UTC m=+0.477740463 container died 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc9d94e058a3e4e192a74c9c8d11b7ea24733614b8aa3a1d3de65ecf23e0c558-merged.mount: Deactivated successfully.
Feb 02 11:10:20 compute-0 podman[75796]: 2026-02-02 11:10:20.396159413 +0000 UTC m=+0.510109048 container remove 8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777 (image=quay.io/ceph/ceph:v19, name=magical_bartik, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:10:20 compute-0 systemd[1]: libpod-conmon-8b37839e2e45bb5663b3fb9cf85d59da56e999daddcc81465959084f2944f777.scope: Deactivated successfully.
Feb 02 11:10:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922922 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.453115738 +0000 UTC m=+0.038705585 container create 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:10:20 compute-0 systemd[1]: Started libpod-conmon-00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3.scope.
Feb 02 11:10:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.516302973 +0000 UTC m=+0.101892820 container init 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.524298351 +0000 UTC m=+0.109888188 container start 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.528224133 +0000 UTC m=+0.113813970 container attach 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.434733134 +0000 UTC m=+0.020322991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb 02 11:10:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb 02 11:10:20 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb 02 11:10:20 compute-0 systemd[1]: libpod-00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3.scope: Deactivated successfully.
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.923971934 +0000 UTC m=+0.509561781 container died 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6afc019d924689b282458a6a8df806661eec8492e955b7d69a7ef3fdf9e7c93a-merged.mount: Deactivated successfully.
Feb 02 11:10:20 compute-0 podman[75849]: 2026-02-02 11:10:20.970533724 +0000 UTC m=+0.556123561 container remove 00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3 (image=quay.io/ceph/ceph:v19, name=elastic_turing, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:10:20 compute-0 systemd[1]: libpod-conmon-00571dc430febc72706184faefb1011883e762f3a886e5075e4aa5cfcdd433c3.scope: Deactivated successfully.
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.060979987 +0000 UTC m=+0.059676465 container create 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:10:21 compute-0 systemd[1]: Started libpod-conmon-904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18.scope.
Feb 02 11:10:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfaa3bd6c155d8e0d9871a0be8dbe1ef61750a2f540235d3572cc4d941427c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfaa3bd6c155d8e0d9871a0be8dbe1ef61750a2f540235d3572cc4d941427c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfaa3bd6c155d8e0d9871a0be8dbe1ef61750a2f540235d3572cc4d941427c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.043367894 +0000 UTC m=+0.042064152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.145789178 +0000 UTC m=+0.144485436 container init 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.152190771 +0000 UTC m=+0.150887019 container start 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.156217416 +0000 UTC m=+0.154913674 container attach 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:10:21 compute-0 ceph-mon[74676]: [02/Feb/2026:11:10:19] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:10:21 compute-0 ceph-mon[74676]: [02/Feb/2026:11:10:19] ENGINE Client ('192.168.122.100', 38618) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:10:21 compute-0 ceph-mon[74676]: [02/Feb/2026:11:10:19] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:10:21 compute-0 ceph-mon[74676]: [02/Feb/2026:11:10:19] ENGINE Bus STARTED
Feb 02 11:10:21 compute-0 ceph-mon[74676]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:21 compute-0 ceph-mon[74676]: Set ssh ssh_user
Feb 02 11:10:21 compute-0 ceph-mon[74676]: Set ssh ssh_config
Feb 02 11:10:21 compute-0 ceph-mon[74676]: ssh user set to ceph-admin. sudo will be used
Feb 02 11:10:21 compute-0 ceph-mon[74676]: mgrmap e8: compute-0.dhyzzj(active, since 2s)
Feb 02 11:10:21 compute-0 ceph-mon[74676]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:21 compute-0 ceph-mon[74676]: Set ssh ssh_identity_key
Feb 02 11:10:21 compute-0 ceph-mon[74676]: Set ssh private key
Feb 02 11:10:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:21 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:21 compute-0 elastic_shannon[75920]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ8IgcHQMJ6yabv8NclHfuiTdrKjDHVt1tGxP6+GNXxlbb80rvIElJ6dl8jCQ6pM9XEwz9ztZ1jGMmdvtgCjm2Lc4BRR6tY+8uNV7iEamQ/k5v4cKMxIZ417ZAHYmtR13Slb4A84hylq+KF0rNgoFpAwCxlxVQylxKb3EOAF2yFUVlbetC+wjiMj8ec4EGNP11fdeGf35qcurElr3IWVXHyzigCsFEF37h9TCbExyGTIkizolTWSgshTwGFFkcGr9H1HqPqzXbiW+FfatsWYkTyh3WUPS1nbwTnFfTvxPvkPFeqStC/osiFkvXtI4UpU3lbBMLbPkv0arsWezZfxILA/q8W78vkru/ZHcJgg/UOQ1mf1RJ0JjHK/+YCYXucLn4NCFi0GRT+X6JvzrCoTyt+FrQ/1uXKHjHNAi3zXa9XM7l4W1H+FWL2fQOT8iFV0kLtXNneDzcF4WHgjh4sLK6ngbIo5xLXXYFD8VN+2X6ksCIvp4uKRPdPNDW5nsigT0= zuul@controller
Feb 02 11:10:21 compute-0 systemd[1]: libpod-904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18.scope: Deactivated successfully.
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.509103283 +0000 UTC m=+0.507799521 container died 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfaa3bd6c155d8e0d9871a0be8dbe1ef61750a2f540235d3572cc4d941427c4-merged.mount: Deactivated successfully.
Feb 02 11:10:21 compute-0 podman[75903]: 2026-02-02 11:10:21.538424301 +0000 UTC m=+0.537120539 container remove 904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18 (image=quay.io/ceph/ceph:v19, name=elastic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:10:21 compute-0 systemd[1]: libpod-conmon-904b869cb04dc23ff40683d8e940592657dbf0cb12ed71b2f126999810524e18.scope: Deactivated successfully.
Feb 02 11:10:21 compute-0 podman[75957]: 2026-02-02 11:10:21.58777941 +0000 UTC m=+0.031418778 container create aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:21 compute-0 systemd[1]: Started libpod-conmon-aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0.scope.
Feb 02 11:10:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adaf3b5b8e4e9bd206effd5863b914c2c1a866e9740fe1f591046909a603244/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adaf3b5b8e4e9bd206effd5863b914c2c1a866e9740fe1f591046909a603244/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adaf3b5b8e4e9bd206effd5863b914c2c1a866e9740fe1f591046909a603244/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:21 compute-0 podman[75957]: 2026-02-02 11:10:21.647287609 +0000 UTC m=+0.090926987 container init aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:21 compute-0 podman[75957]: 2026-02-02 11:10:21.651821959 +0000 UTC m=+0.095461337 container start aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:21 compute-0 podman[75957]: 2026-02-02 11:10:21.654923907 +0000 UTC m=+0.098563295 container attach aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:10:21 compute-0 podman[75957]: 2026-02-02 11:10:21.572819693 +0000 UTC m=+0.016459081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:21 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:22 compute-0 sshd-session[76000]: Accepted publickey for ceph-admin from 192.168.122.100 port 43834 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:22 compute-0 systemd-logind[793]: New session 21 of user ceph-admin.
Feb 02 11:10:22 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 02 11:10:22 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 02 11:10:22 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 02 11:10:22 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 02 11:10:22 compute-0 systemd[76004]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:22 compute-0 ceph-mon[74676]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:22 compute-0 ceph-mon[74676]: Set ssh ssh_identity_pub
Feb 02 11:10:22 compute-0 systemd[76004]: Queued start job for default target Main User Target.
Feb 02 11:10:22 compute-0 systemd[76004]: Created slice User Application Slice.
Feb 02 11:10:22 compute-0 systemd[76004]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 11:10:22 compute-0 systemd[76004]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 11:10:22 compute-0 systemd[76004]: Reached target Paths.
Feb 02 11:10:22 compute-0 systemd[76004]: Reached target Timers.
Feb 02 11:10:22 compute-0 systemd[76004]: Starting D-Bus User Message Bus Socket...
Feb 02 11:10:22 compute-0 systemd[76004]: Starting Create User's Volatile Files and Directories...
Feb 02 11:10:22 compute-0 systemd[76004]: Listening on D-Bus User Message Bus Socket.
Feb 02 11:10:22 compute-0 systemd[76004]: Reached target Sockets.
Feb 02 11:10:22 compute-0 systemd[76004]: Finished Create User's Volatile Files and Directories.
Feb 02 11:10:22 compute-0 systemd[76004]: Reached target Basic System.
Feb 02 11:10:22 compute-0 systemd[76004]: Reached target Main User Target.
Feb 02 11:10:22 compute-0 systemd[76004]: Startup finished in 102ms.
Feb 02 11:10:22 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 02 11:10:22 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Feb 02 11:10:22 compute-0 sshd-session[76000]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:22 compute-0 sshd-session[76017]: Accepted publickey for ceph-admin from 192.168.122.100 port 43846 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:22 compute-0 systemd-logind[793]: New session 23 of user ceph-admin.
Feb 02 11:10:22 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Feb 02 11:10:22 compute-0 sshd-session[76017]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:22 compute-0 sudo[76024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:22 compute-0 sudo[76024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:22 compute-0 sudo[76024]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:22 compute-0 sshd-session[76049]: Accepted publickey for ceph-admin from 192.168.122.100 port 43856 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:22 compute-0 systemd-logind[793]: New session 24 of user ceph-admin.
Feb 02 11:10:22 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Feb 02 11:10:22 compute-0 sshd-session[76049]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:22 compute-0 sudo[76053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Feb 02 11:10:22 compute-0 sudo[76053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:22 compute-0 sudo[76053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:23 compute-0 sshd-session[76078]: Accepted publickey for ceph-admin from 192.168.122.100 port 43860 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:23 compute-0 systemd-logind[793]: New session 25 of user ceph-admin.
Feb 02 11:10:23 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Feb 02 11:10:23 compute-0 sshd-session[76078]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:23 compute-0 sudo[76082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Feb 02 11:10:23 compute-0 sudo[76082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:23 compute-0 sudo[76082]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:23 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb 02 11:10:23 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb 02 11:10:23 compute-0 sshd-session[76107]: Accepted publickey for ceph-admin from 192.168.122.100 port 43874 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:23 compute-0 ceph-mon[74676]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:23 compute-0 ceph-mon[74676]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:23 compute-0 systemd-logind[793]: New session 26 of user ceph-admin.
Feb 02 11:10:23 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Feb 02 11:10:23 compute-0 sshd-session[76107]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:23 compute-0 sudo[76111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:23 compute-0 sudo[76111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:23 compute-0 sudo[76111]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:23 compute-0 sshd-session[76136]: Accepted publickey for ceph-admin from 192.168.122.100 port 43886 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:23 compute-0 systemd-logind[793]: New session 27 of user ceph-admin.
Feb 02 11:10:23 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Feb 02 11:10:23 compute-0 sshd-session[76136]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:23 compute-0 sudo[76140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:23 compute-0 sudo[76140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:23 compute-0 sudo[76140]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:23 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:23 compute-0 sshd-session[76165]: Accepted publickey for ceph-admin from 192.168.122.100 port 43890 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:23 compute-0 systemd-logind[793]: New session 28 of user ceph-admin.
Feb 02 11:10:23 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Feb 02 11:10:23 compute-0 sshd-session[76165]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:24 compute-0 sudo[76169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Feb 02 11:10:24 compute-0 sudo[76169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:24 compute-0 sudo[76169]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:24 compute-0 sshd-session[76194]: Accepted publickey for ceph-admin from 192.168.122.100 port 43892 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:24 compute-0 systemd-logind[793]: New session 29 of user ceph-admin.
Feb 02 11:10:24 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Feb 02 11:10:24 compute-0 sshd-session[76194]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:24 compute-0 sudo[76198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:24 compute-0 ceph-mon[74676]: Deploying cephadm binary to compute-0
Feb 02 11:10:24 compute-0 sudo[76198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:24 compute-0 sudo[76198]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:24 compute-0 sshd-session[76223]: Accepted publickey for ceph-admin from 192.168.122.100 port 43900 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:24 compute-0 systemd-logind[793]: New session 30 of user ceph-admin.
Feb 02 11:10:24 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Feb 02 11:10:24 compute-0 sshd-session[76223]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:24 compute-0 sudo[76227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Feb 02 11:10:24 compute-0 sudo[76227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:24 compute-0 sudo[76227]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:24 compute-0 sshd-session[76252]: Accepted publickey for ceph-admin from 192.168.122.100 port 43908 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:24 compute-0 systemd-logind[793]: New session 31 of user ceph-admin.
Feb 02 11:10:24 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Feb 02 11:10:24 compute-0 sshd-session[76252]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053045 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:25 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:25 compute-0 sshd-session[76279]: Accepted publickey for ceph-admin from 192.168.122.100 port 43912 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:25 compute-0 systemd-logind[793]: New session 32 of user ceph-admin.
Feb 02 11:10:25 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Feb 02 11:10:25 compute-0 sshd-session[76279]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:26 compute-0 sudo[76283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Feb 02 11:10:26 compute-0 sudo[76283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:26 compute-0 sudo[76283]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:26 compute-0 sshd-session[76308]: Accepted publickey for ceph-admin from 192.168.122.100 port 43922 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:10:26 compute-0 systemd-logind[793]: New session 33 of user ceph-admin.
Feb 02 11:10:26 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Feb 02 11:10:26 compute-0 sshd-session[76308]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:10:26 compute-0 sudo[76312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Feb 02 11:10:26 compute-0 sudo[76312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:26 compute-0 sudo[76312]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:26 compute-0 ceph-mgr[74969]: [cephadm INFO root] Added host compute-0
Feb 02 11:10:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 02 11:10:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:10:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:26 compute-0 practical_dubinsky[75974]: Added host 'compute-0' with addr '192.168.122.100'
Feb 02 11:10:26 compute-0 systemd[1]: libpod-aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0.scope: Deactivated successfully.
Feb 02 11:10:26 compute-0 podman[75957]: 2026-02-02 11:10:26.663803821 +0000 UTC m=+5.107443199 container died aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6adaf3b5b8e4e9bd206effd5863b914c2c1a866e9740fe1f591046909a603244-merged.mount: Deactivated successfully.
Feb 02 11:10:26 compute-0 sudo[76357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:26 compute-0 sudo[76357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:26 compute-0 sudo[76357]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:26 compute-0 podman[75957]: 2026-02-02 11:10:26.699731907 +0000 UTC m=+5.143371275 container remove aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0 (image=quay.io/ceph/ceph:v19, name=practical_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:26 compute-0 systemd[1]: libpod-conmon-aaa4d155b2430148b678f5341a947d3f62e5048a24a07261888778ac1187cee0.scope: Deactivated successfully.
Feb 02 11:10:26 compute-0 sudo[76395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Feb 02 11:10:26 compute-0 sudo[76395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:26 compute-0 podman[76402]: 2026-02-02 11:10:26.751489835 +0000 UTC m=+0.035477614 container create 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:10:26 compute-0 systemd[1]: Started libpod-conmon-8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c.scope.
Feb 02 11:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e84bea4f717c9ee7a5c2f8dea9f7ae8e822da9891350e89bfd7610e55d617fcb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e84bea4f717c9ee7a5c2f8dea9f7ae8e822da9891350e89bfd7610e55d617fcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e84bea4f717c9ee7a5c2f8dea9f7ae8e822da9891350e89bfd7610e55d617fcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:26 compute-0 podman[76402]: 2026-02-02 11:10:26.818308623 +0000 UTC m=+0.102296442 container init 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:10:26 compute-0 podman[76402]: 2026-02-02 11:10:26.825671333 +0000 UTC m=+0.109659132 container start 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:10:26 compute-0 podman[76402]: 2026-02-02 11:10:26.829108591 +0000 UTC m=+0.113096370 container attach 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 11:10:26 compute-0 podman[76402]: 2026-02-02 11:10:26.735828378 +0000 UTC m=+0.019816187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:27 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:27 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb 02 11:10:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb 02 11:10:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:10:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:27 compute-0 beautiful_margulis[76436]: Scheduled mon update...
Feb 02 11:10:27 compute-0 systemd[1]: libpod-8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c.scope: Deactivated successfully.
Feb 02 11:10:27 compute-0 podman[76402]: 2026-02-02 11:10:27.18346156 +0000 UTC m=+0.467449339 container died 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:27 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:10:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:28 compute-0 podman[76472]: 2026-02-02 11:10:28.907132791 +0000 UTC m=+1.965422345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e84bea4f717c9ee7a5c2f8dea9f7ae8e822da9891350e89bfd7610e55d617fcb-merged.mount: Deactivated successfully.
Feb 02 11:10:28 compute-0 podman[76402]: 2026-02-02 11:10:28.922836059 +0000 UTC m=+2.206823858 container remove 8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c (image=quay.io/ceph/ceph:v19, name=beautiful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
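
Each bootstrap-time "ceph orch" call runs in a disposable container like beautiful_margulis above: image pull, create, start, attach, one line of output, died, remove, all inside roughly two seconds. A rough way to reconstruct those lifecycles from a saved journal, keyed on the 64-hex container IDs that podman logs (a sketch, assuming event lines shaped like the ones here):

    import re
    from collections import defaultdict

    # Matches podman event lines such as "... container died 8c662e2022...".
    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    def lifecycles(journal_lines):
        """Map short container ID -> ordered list of lifecycle events."""
        events = defaultdict(list)
        for line in journal_lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group(2)[:12]].append(m.group(1))
        return dict(events)
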
Feb 02 11:10:28 compute-0 podman[76509]: 2026-02-02 11:10:28.975541644 +0000 UTC m=+0.039353835 container create 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.006758396 +0000 UTC m=+0.042384312 container create 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:10:29 compute-0 systemd[1]: Started libpod-conmon-39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3.scope.
Feb 02 11:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:29 compute-0 systemd[1]: Started libpod-conmon-8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3.scope.
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbe67fbc1a853ac7fa1ded217eb1e615719af922cb25d0cd128137c0cb23f04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbe67fbc1a853ac7fa1ded217eb1e615719af922cb25d0cd128137c0cb23f04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbe67fbc1a853ac7fa1ded217eb1e615719af922cb25d0cd128137c0cb23f04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
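
The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mean the XFS filesystem backing these overlay bind mounts was formatted without the bigtime feature, so its on-disk timestamps are 32-bit signed epoch seconds. The quoted limit is exactly the 32-bit rollover:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
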
Feb 02 11:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:29.048640772 +0000 UTC m=+0.112452983 container init 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:29.055322393 +0000 UTC m=+0.119134584 container start 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:28.957432497 +0000 UTC m=+0.021244688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.063525807 +0000 UTC m=+0.099151733 container init 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.068218731 +0000 UTC m=+0.103844637 container start 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:29.070530887 +0000 UTC m=+0.134343098 container attach 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.074948353 +0000 UTC m=+0.110574289 container attach 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:28.990939244 +0000 UTC m=+0.026565170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:29 compute-0 systemd[1]: libpod-conmon-8c662e202271a1be988ab7f0ff4326e6f3188e1fc9f4ba416aaad3fa1a8c172c.scope: Deactivated successfully.
Feb 02 11:10:29 compute-0 sweet_babbage[76550]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Feb 02 11:10:29 compute-0 systemd[1]: libpod-8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3.scope: Deactivated successfully.
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.171589673 +0000 UTC m=+0.207215599 container died 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f745ebe7c6d535b5e6f5f966b3a5a471171b4b9913a6582e04b70438bc83cf9-merged.mount: Deactivated successfully.
Feb 02 11:10:29 compute-0 podman[76526]: 2026-02-02 11:10:29.207279532 +0000 UTC m=+0.242905438 container remove 8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3 (image=quay.io/ceph/ceph:v19, name=sweet_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:10:29 compute-0 systemd[1]: libpod-conmon-8eb087bd68cc28c9beedc17662186aff426d2f9635f489acf92744e88cc148b3.scope: Deactivated successfully.
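
The sweet_babbage container existed only to print the image's version banner, confirming that the pulled v19 image really is 19.2.3 "squid"; note the SHA1 in the banner matches the CEPH_SHA1 label on every podman event line. A sketch of the same one-shot probe (--entrypoint avoids depending on whatever the image's default entrypoint is):

    import subprocess

    # One-shot version probe mirroring sweet_babbage; podman removes the
    # container when it exits.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "ceph",
         "quay.io/ceph/ceph:v19", "--version"],
        text=True)
    print(out.strip())  # ceph version 19.2.3 (...) squid (stable)
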
Feb 02 11:10:29 compute-0 ceph-mon[74676]: Added host compute-0
Feb 02 11:10:29 compute-0 ceph-mon[74676]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:29 compute-0 ceph-mon[74676]: Saving service mon spec with placement count:5
Feb 02 11:10:29 compute-0 sudo[76395]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb 02 11:10:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:29 compute-0 sudo[76588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:29 compute-0 sudo[76588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:29 compute-0 sudo[76588]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:29 compute-0 sudo[76613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Feb 02 11:10:29 compute-0 sudo[76613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:29 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:29 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb 02 11:10:29 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb 02 11:10:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:10:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:29 compute-0 bold_hellman[76544]: Scheduled mgr update...
Feb 02 11:10:29 compute-0 systemd[1]: libpod-39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3.scope: Deactivated successfully.
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:29.485796545 +0000 UTC m=+0.549608766 container died 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bbe67fbc1a853ac7fa1ded217eb1e615719af922cb25d0cd128137c0cb23f04-merged.mount: Deactivated successfully.
Feb 02 11:10:29 compute-0 podman[76509]: 2026-02-02 11:10:29.516188773 +0000 UTC m=+0.580000964 container remove 39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3 (image=quay.io/ceph/ceph:v19, name=bold_hellman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:29 compute-0 systemd[1]: libpod-conmon-39085c55fc3733cf5c2dd7308dbb946c1b69bd8add13503e5f2f02ff6a11cec3.scope: Deactivated successfully.
Feb 02 11:10:29 compute-0 podman[76653]: 2026-02-02 11:10:29.589218359 +0000 UTC m=+0.054531719 container create 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:10:29 compute-0 systemd[1]: Started libpod-conmon-71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4.scope.
Feb 02 11:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23013b8c315442df50b25cead8316769c2da0bcf1a4cc1423c6171f0c2d354e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23013b8c315442df50b25cead8316769c2da0bcf1a4cc1423c6171f0c2d354e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23013b8c315442df50b25cead8316769c2da0bcf1a4cc1423c6171f0c2d354e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:29 compute-0 podman[76653]: 2026-02-02 11:10:29.650145198 +0000 UTC m=+0.115458558 container init 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:29 compute-0 podman[76653]: 2026-02-02 11:10:29.655227153 +0000 UTC m=+0.120540503 container start 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:10:29 compute-0 podman[76653]: 2026-02-02 11:10:29.658543858 +0000 UTC m=+0.123857278 container attach 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:29 compute-0 podman[76653]: 2026-02-02 11:10:29.564692618 +0000 UTC m=+0.030005978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:29 compute-0 sudo[76613]: pam_unix(sudo:session): session closed for user root
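
The ceph-admin sudo pairs above are cephadm's remote-execution pattern: probe for an interpreter with "which python3", then feed the staged copy of the cephadm binary (kept under the cluster fsid and, it appears, named after a checksum of the file) to that interpreter, here for check-host. A local stand-in for the same two steps, with both paths copied from this log:

    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    CEPHADM = ("/var/lib/ceph/%s/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
               % FSID)

    # Step 1: locate python3, exactly as the 'which python3' lines do.
    python3 = subprocess.check_output(
        ["sudo", "which", "python3"], text=True).strip()
    # Step 2: run the staged cephadm binary with the same fixed timeout.
    subprocess.run(
        ["sudo", python3, CEPHADM, "--timeout", "895", "check-host"],
        check=True)
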
Feb 02 11:10:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:29 compute-0 sudo[76694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:29 compute-0 sudo[76694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:29 compute-0 sudo[76694]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:29 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:29 compute-0 sudo[76736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:10:29 compute-0 sudo[76736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:30 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:30 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service crash spec with placement *
Feb 02 11:10:30 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb 02 11:10:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:10:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 funny_benz[76685]: Scheduled crash update...
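
Three service specs have now been saved with three different placement styles: mon with count 5, mgr with count 2, and crash with "*", which targets every managed host. In spec form those placements look roughly like this (field names follow cephadm's placement format; the stored JSON is not reproduced in the log):

    # The three placement shapes saved above.
    placements = {
        "mon":   {"count": 5},           # "placement count:5"
        "mgr":   {"count": 2},           # "placement count:2"
        "crash": {"host_pattern": "*"},  # "placement *": every managed host
    }
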
Feb 02 11:10:30 compute-0 systemd[1]: libpod-71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4.scope: Deactivated successfully.
Feb 02 11:10:30 compute-0 podman[76653]: 2026-02-02 11:10:30.047893426 +0000 UTC m=+0.513206776 container died 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-23013b8c315442df50b25cead8316769c2da0bcf1a4cc1423c6171f0c2d354e4-merged.mount: Deactivated successfully.
Feb 02 11:10:30 compute-0 podman[76653]: 2026-02-02 11:10:30.088616919 +0000 UTC m=+0.553930279 container remove 71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:10:30 compute-0 systemd[1]: libpod-conmon-71d4a9d27b5036636d534dd20f2c1b0fef689b89a3998cbbbbece212799999e4.scope: Deactivated successfully.
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.157803615 +0000 UTC m=+0.053081117 container create c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:10:30 compute-0 systemd[1]: Started libpod-conmon-c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c.scope.
Feb 02 11:10:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d794e93c532cbac0f0f2ada2c4a1b0ec4e04a80bc5b0f027ca0c2e944cd38cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d794e93c532cbac0f0f2ada2c4a1b0ec4e04a80bc5b0f027ca0c2e944cd38cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d794e93c532cbac0f0f2ada2c4a1b0ec4e04a80bc5b0f027ca0c2e944cd38cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.139510553 +0000 UTC m=+0.034788085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.238497549 +0000 UTC m=+0.133775101 container init c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.244587223 +0000 UTC m=+0.139864725 container start c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.248223557 +0000 UTC m=+0.143501079 container attach c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:10:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 podman[76866]: 2026-02-02 11:10:30.314138299 +0000 UTC m=+0.052107979 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:30 compute-0 podman[76866]: 2026-02-02 11:10:30.399918019 +0000 UTC m=+0.137887649 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:10:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
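
The mon's _set_new_cache_sizes line shows the cache autotuner splitting roughly 1 GB between what appear to be incremental-map, full-map, and key-value (RocksDB) allocations; the three figures account for the cache size to within a tenth of a percent:

    # Arithmetic check on the _set_new_cache_sizes line above.
    cache_size = 1020054710
    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 322961408
    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)  # 1019215872 838838 (about 0.08% slack)
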
Feb 02 11:10:30 compute-0 sudo[76736]: pam_unix(sudo:session): session closed for user root
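
The cephadm "ls" call that just closed passed the image by digest (quay.io/ceph/ceph@sha256:7c69...) rather than by the v19 tag the one-shot containers use; the "config set container_image" handled at 11:10:29 evidently pins the cluster to that immutable reference. One way to resolve a pulled tag to its digest locally (a sketch, assuming the tag is already in local storage):

    import subprocess

    # Go-template query against podman's local image store.
    digest = subprocess.check_output(
        ["podman", "image", "inspect",
         "--format", "{{index .RepoDigests 0}}",
         "quay.io/ceph/ceph:v19"],
        text=True).strip()
    print(digest)  # quay.io/ceph/ceph@sha256:7c69e59b...
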
Feb 02 11:10:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:30 compute-0 sudo[76936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb 02 11:10:30 compute-0 sudo[76936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:30 compute-0 sudo[76936]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/637305397' entity='client.admin' 
Feb 02 11:10:30 compute-0 systemd[1]: libpod-c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c.scope: Deactivated successfully.
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.613533969 +0000 UTC m=+0.508811471 container died c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d794e93c532cbac0f0f2ada2c4a1b0ec4e04a80bc5b0f027ca0c2e944cd38cf-merged.mount: Deactivated successfully.
Feb 02 11:10:30 compute-0 sudo[76962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:10:30 compute-0 podman[76813]: 2026-02-02 11:10:30.643549736 +0000 UTC m=+0.538827238 container remove c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c (image=quay.io/ceph/ceph:v19, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:10:30 compute-0 sudo[76962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:30 compute-0 systemd[1]: libpod-conmon-c74fc280698f9415a075e60e6cf0de83b0b0afc8652cc5f5f0773471ee1e2e7c.scope: Deactivated successfully.
Feb 02 11:10:30 compute-0 podman[77000]: 2026-02-02 11:10:30.701892162 +0000 UTC m=+0.042434483 container create edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:30 compute-0 systemd[1]: Started libpod-conmon-edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd.scope.
Feb 02 11:10:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ad28f2c85a97beef81011ad28dea96040acd95529f45c9b4d87fa8c43c523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ad28f2c85a97beef81011ad28dea96040acd95529f45c9b4d87fa8c43c523/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ad28f2c85a97beef81011ad28dea96040acd95529f45c9b4d87fa8c43c523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:30 compute-0 podman[77000]: 2026-02-02 11:10:30.770823361 +0000 UTC m=+0.111365702 container init edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:10:30 compute-0 podman[77000]: 2026-02-02 11:10:30.776015529 +0000 UTC m=+0.116557850 container start edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:30 compute-0 podman[77000]: 2026-02-02 11:10:30.681917032 +0000 UTC m=+0.022459373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:30 compute-0 podman[77000]: 2026-02-02 11:10:30.779750806 +0000 UTC m=+0.120293147 container attach edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:30 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77048 (sysctl)
Feb 02 11:10:30 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 02 11:10:30 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
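
The binfmt_misc automount fires because a process (pid 77048, running sysctl, presumably part of the host checks inside the amazing_bassi container) touched /proc/sys/fs/binfmt_misc, and systemd mounts the filesystem on demand. A quick after-the-fact check:

    # Confirm the on-demand binfmt_misc mount took effect (sketch).
    with open("/proc/mounts") as mounts:
        print(any(line.split()[1] == "/proc/sys/fs/binfmt_misc"
                  for line in mounts))
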
Feb 02 11:10:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb 02 11:10:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:31 compute-0 sudo[76962]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:31 compute-0 systemd[1]: libpod-edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd.scope: Deactivated successfully.
Feb 02 11:10:31 compute-0 podman[77074]: 2026-02-02 11:10:31.197757361 +0000 UTC m=+0.029211364 container died edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:31 compute-0 sudo[77075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:31 compute-0 sudo[77075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:31 compute-0 sudo[77075]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0ad28f2c85a97beef81011ad28dea96040acd95529f45c9b4d87fa8c43c523-merged.mount: Deactivated successfully.
Feb 02 11:10:31 compute-0 podman[77074]: 2026-02-02 11:10:31.235463378 +0000 UTC m=+0.066917371 container remove edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd (image=quay.io/ceph/ceph:v19, name=amazing_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:10:31 compute-0 systemd[1]: libpod-conmon-edc9075d637b497fbf070fa28ffe6a3600aa39c2a6b5c048a463409242d089fd.scope: Deactivated successfully.
Feb 02 11:10:31 compute-0 sudo[77114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Feb 02 11:10:31 compute-0 sudo[77114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:31 compute-0 ceph-mon[74676]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:31 compute-0 ceph-mon[74676]: Saving service mgr spec with placement count:2
Feb 02 11:10:31 compute-0 ceph-mon[74676]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:31 compute-0 ceph-mon[74676]: Saving service crash spec with placement *
Feb 02 11:10:31 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/637305397' entity='client.admin' 
Feb 02 11:10:31 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.295481232 +0000 UTC m=+0.037100901 container create f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:31 compute-0 systemd[1]: Started libpod-conmon-f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45.scope.
Feb 02 11:10:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c441c2028bb52bb6bfbca1ee6f9df4843c82d21df892059f4c5d254ea8037b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c441c2028bb52bb6bfbca1ee6f9df4843c82d21df892059f4c5d254ea8037b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c441c2028bb52bb6bfbca1ee6f9df4843c82d21df892059f4c5d254ea8037b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.278285401 +0000 UTC m=+0.019905110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.377315459 +0000 UTC m=+0.118935158 container init f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.383191957 +0000 UTC m=+0.124811636 container start f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.386083809 +0000 UTC m=+0.127703518 container attach f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:10:31 compute-0 sudo[77114]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:31 compute-0 sudo[77197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:31 compute-0 sudo[77197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:31 compute-0 sudo[77197]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:31 compute-0 sudo[77222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- inventory --format=json-pretty --filter-for-batch
Feb 02 11:10:31 compute-0 sudo[77222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
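
The ceph-volume "inventory --format=json-pretty --filter-for-batch" run just launched is how the orchestrator discovers disks eligible for OSDs; --filter-for-batch limits the report to devices that batch OSD creation could actually consume. A sketch of consuming that JSON, with the field names assumed from ceph-volume's inventory output rather than taken from this log, and --fsid omitted for brevity:

    import json
    import subprocess

    report = json.loads(subprocess.check_output(
        ["cephadm", "ceph-volume", "--", "inventory", "--format=json"],
        text=True))
    for dev in report:
        if dev.get("available"):
            print(dev["path"], "usable")
        else:
            print(dev["path"], "rejected:", dev.get("rejected_reasons"))
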
Feb 02 11:10:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:31 compute-0 ceph-mgr[74969]: [cephadm INFO root] Added label _admin to host compute-0
Feb 02 11:10:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb 02 11:10:31 compute-0 serene_joliot[77155]: Added label _admin to host compute-0
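
Taken together, the "orch client-keyring set client.admin" call dispatched at 11:10:31 with placement label:_admin and the "orch host label add compute-0 _admin" confirmed just above are the cephadm idiom for admin-file distribution: hosts carrying the _admin label get /etc/ceph/ceph.conf and the client.admin keyring maintained for them. Replayed as plain CLI (a sketch; it needs an admin keyring wherever it runs):

    import subprocess

    # The two orchestrator calls dispatched above.
    for cmd in (
        ["ceph", "orch", "client-keyring", "set", "client.admin",
         "label:_admin"],
        ["ceph", "orch", "host", "label", "add", "compute-0", "_admin"],
    ):
        subprocess.run(cmd, check=True)
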
Feb 02 11:10:31 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:31 compute-0 systemd[1]: libpod-f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45.scope: Deactivated successfully.
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.763989515 +0000 UTC m=+0.505609204 container died f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-12c441c2028bb52bb6bfbca1ee6f9df4843c82d21df892059f4c5d254ea8037b-merged.mount: Deactivated successfully.
Feb 02 11:10:31 compute-0 podman[77138]: 2026-02-02 11:10:31.800325229 +0000 UTC m=+0.541944908 container remove f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45 (image=quay.io/ceph/ceph:v19, name=serene_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:10:31 compute-0 systemd[1]: libpod-conmon-f51c383dd69019f4bd4f3a2c10ed0797a43fea4112e1d354d8117c8aeb40ab45.scope: Deactivated successfully.
Feb 02 11:10:31 compute-0 podman[77273]: 2026-02-02 11:10:31.872273914 +0000 UTC m=+0.053185582 container create 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:31 compute-0 systemd[1]: Started libpod-conmon-7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107.scope.
Feb 02 11:10:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7819fa5c041fcc15cfe879fde63292c84c2363d89281d1a160289df58e0d80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7819fa5c041fcc15cfe879fde63292c84c2363d89281d1a160289df58e0d80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7819fa5c041fcc15cfe879fde63292c84c2363d89281d1a160289df58e0d80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:31 compute-0 podman[77273]: 2026-02-02 11:10:31.839101026 +0000 UTC m=+0.020012714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:31 compute-0 podman[77273]: 2026-02-02 11:10:31.945955471 +0000 UTC m=+0.126867159 container init 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:10:31 compute-0 podman[77273]: 2026-02-02 11:10:31.953045804 +0000 UTC m=+0.133957472 container start 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:10:31 compute-0 podman[77273]: 2026-02-02 11:10:31.956674714 +0000 UTC m=+0.137586402 container attach 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.034486675 +0000 UTC m=+0.051898302 container create 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:10:32 compute-0 systemd[1]: Started libpod-conmon-40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e.scope.
Feb 02 11:10:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.090300955 +0000 UTC m=+0.107712622 container init 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.095912694 +0000 UTC m=+0.113324331 container start 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:32 compute-0 youthful_newton[77354]: 167 167
Feb 02 11:10:32 compute-0 systemd[1]: libpod-40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e.scope: Deactivated successfully.
Feb 02 11:10:32 compute-0 conmon[77354]: conmon 40a05b44fb72ee40d3bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e.scope/container/memory.events
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.100311656 +0000 UTC m=+0.117723293 container attach 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.101160572 +0000 UTC m=+0.118572209 container died 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.012316528 +0000 UTC m=+0.029728245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-16246157e08537606fc8bbac0846ef5839b9fd9f31271c513dd0a7c8a0537ae1-merged.mount: Deactivated successfully.
Feb 02 11:10:32 compute-0 podman[77319]: 2026-02-02 11:10:32.138903718 +0000 UTC m=+0.156315345 container remove 40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:32 compute-0 systemd[1]: libpod-conmon-40a05b44fb72ee40d3bdac8c35c7c9d7ac4791267f9b56c6cba1de73a978a81e.scope: Deactivated successfully.
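The podman burst above (container create -> start -> attach -> died -> remove for youthful_newton, all inside roughly 40 ms) is consistent with the short-lived probe containers cephadm launches during host checks: the container printed a single line, "167 167", which is the uid/gid pair reserved for the ceph user in the ceph images. A minimal Python sketch of the same one-shot pattern, assuming the probe stats a ceph-owned path (the log shows only the output, not the exact command, so the stat invocation is an assumption):

    import subprocess

    def probe_ceph_ids(image="quay.io/ceph/ceph:v19"):
        # Assumption: stat a ceph-owned path to recover uid/gid; the log only
        # shows the probe's output ("167 167") and its --rm lifecycle.
        result = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True,
        )
        uid, gid = result.stdout.split()
        return int(uid), int(gid)

    if __name__ == "__main__":
        print(probe_ceph_ids())  # expected on this host: (167, 167)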
Feb 02 11:10:32 compute-0 ceph-mon[74676]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:32 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:32 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb 02 11:10:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759053118' entity='client.admin' 
Feb 02 11:10:32 compute-0 ecstatic_easley[77314]: set mgr/dashboard/cluster/status
Feb 02 11:10:32 compute-0 systemd[1]: libpod-7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107.scope: Deactivated successfully.
Feb 02 11:10:32 compute-0 conmon[77314]: conmon 7d09fb9027a2cbbf4a1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107.scope/container/memory.events
Feb 02 11:10:32 compute-0 podman[77273]: 2026-02-02 11:10:32.400733787 +0000 UTC m=+0.581645455 container died 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a7819fa5c041fcc15cfe879fde63292c84c2363d89281d1a160289df58e0d80-merged.mount: Deactivated successfully.
Feb 02 11:10:32 compute-0 podman[77273]: 2026-02-02 11:10:32.431989527 +0000 UTC m=+0.612901195 container remove 7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107 (image=quay.io/ceph/ceph:v19, name=ecstatic_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:32 compute-0 systemd[1]: libpod-conmon-7d09fb9027a2cbbf4a1f74d70205fc69ab4266c5ea57b261a9029608d841f107.scope: Deactivated successfully.
Feb 02 11:10:32 compute-0 sudo[73638]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:32 compute-0 podman[77391]: 2026-02-02 11:10:32.569930939 +0000 UTC m=+0.042180151 container create 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:10:32 compute-0 systemd[1]: Started libpod-conmon-1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12.scope.
Feb 02 11:10:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7244d9ee725baacaf07ffe9dea0667989bde31113aa67709888ad9f57e6d624d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7244d9ee725baacaf07ffe9dea0667989bde31113aa67709888ad9f57e6d624d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7244d9ee725baacaf07ffe9dea0667989bde31113aa67709888ad9f57e6d624d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7244d9ee725baacaf07ffe9dea0667989bde31113aa67709888ad9f57e6d624d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:32 compute-0 podman[77391]: 2026-02-02 11:10:32.639937785 +0000 UTC m=+0.112187017 container init 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:10:32 compute-0 podman[77391]: 2026-02-02 11:10:32.550066341 +0000 UTC m=+0.022315573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:10:32 compute-0 podman[77391]: 2026-02-02 11:10:32.646862684 +0000 UTC m=+0.119111896 container start 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:10:32 compute-0 podman[77391]: 2026-02-02 11:10:32.650433471 +0000 UTC m=+0.122682683 container attach 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:32 compute-0 sudo[77436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfxpszpgbfchtiwazwmkznplkhymilue ; /usr/bin/python3'
Feb 02 11:10:32 compute-0 sudo[77436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:32 compute-0 python3[77440]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
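The Ansible command task above wraps the admin ceph CLI in a disposable container: everything after --entrypoint ceph is ordinary ceph argv. A sketch of the same invocation built programmatically; the helper name and structure are illustrative, not taken from the playbook:

    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    IMAGE = "quay.io/ceph/ceph:v19"

    def ceph(*args):
        # Mirror the podman wrapper from the logged task: host networking,
        # /etc/ceph bind-mounted, ceph as the entrypoint.
        argv = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(argv, capture_output=True, text=True,
                              check=True).stdout

    # The call logged above: stop cephadm from pinning images by repo digest.
    # ceph("config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false")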
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.010891518 +0000 UTC m=+0.037077856 container create 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:33 compute-0 systemd[1]: Started libpod-conmon-70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93.scope.
Feb 02 11:10:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c22366c7c2badbdd3155e8af5d3cda2d5fa368ee3da7eccc5af07294aae5ddf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c22366c7c2badbdd3155e8af5d3cda2d5fa368ee3da7eccc5af07294aae5ddf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:32.99431715 +0000 UTC m=+0.020503518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.101065022 +0000 UTC m=+0.127251380 container init 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.107051342 +0000 UTC m=+0.133237680 container start 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.110593719 +0000 UTC m=+0.136780077 container attach 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:33 compute-0 quirky_colden[77408]: [
Feb 02 11:10:33 compute-0 quirky_colden[77408]:     {
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "available": false,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "being_replaced": false,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "ceph_device_lvm": false,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "lsm_data": {},
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "lvs": [],
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "path": "/dev/sr0",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "rejected_reasons": [
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "Insufficient space (<5GB)",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "Has a FileSystem"
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         ],
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         "sys_api": {
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "actuators": null,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "device_nodes": [
Feb 02 11:10:33 compute-0 quirky_colden[77408]:                 "sr0"
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             ],
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "devname": "sr0",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "human_readable_size": "482.00 KB",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "id_bus": "ata",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "model": "QEMU DVD-ROM",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "nr_requests": "2",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "parent": "/dev/sr0",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "partitions": {},
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "path": "/dev/sr0",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "removable": "1",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "rev": "2.5+",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "ro": "0",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "rotational": "1",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "sas_address": "",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "sas_device_handle": "",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "scheduler_mode": "mq-deadline",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "sectors": 0,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "sectorsize": "2048",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "size": 493568.0,
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "support_discard": "2048",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "type": "disk",
Feb 02 11:10:33 compute-0 quirky_colden[77408]:             "vendor": "QEMU"
Feb 02 11:10:33 compute-0 quirky_colden[77408]:         }
Feb 02 11:10:33 compute-0 quirky_colden[77408]:     }
Feb 02 11:10:33 compute-0 quirky_colden[77408]: ]
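The JSON array printed by quirky_colden above matches the device-inventory format emitted by ceph-volume inventory / ceph orch device ls --format json (which exact command ran is not shown in the log): /dev/sr0 is reported unavailable with two rejected_reasons. A short sketch for consuming such a report; the filtering policy is illustrative:

    import json

    def usable_devices(inventory_json: str):
        # Yield paths of devices cephadm considers available; report why the
        # rest were rejected, as /dev/sr0 is above.
        for dev in json.loads(inventory_json):
            if dev.get("available"):
                yield dev["path"]
            else:
                reasons = ", ".join(dev.get("rejected_reasons", []))
                print(f"skipping {dev['path']}: {reasons}")

    # Example with the single entry from the log (abbreviated):
    sample = ('[{"available": false, "path": "/dev/sr0", '
              '"rejected_reasons": ["Insufficient space (<5GB)", '
              '"Has a FileSystem"]}]')
    print(list(usable_devices(sample)))  # -> [] ; sr0 is rejected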
Feb 02 11:10:33 compute-0 systemd[1]: libpod-1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12.scope: Deactivated successfully.
Feb 02 11:10:33 compute-0 podman[77391]: 2026-02-02 11:10:33.340717774 +0000 UTC m=+0.812966996 container died 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7244d9ee725baacaf07ffe9dea0667989bde31113aa67709888ad9f57e6d624d-merged.mount: Deactivated successfully.
Feb 02 11:10:33 compute-0 ceph-mon[74676]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:33 compute-0 ceph-mon[74676]: Added label _admin to host compute-0
Feb 02 11:10:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/759053118' entity='client.admin' 
Feb 02 11:10:33 compute-0 podman[77391]: 2026-02-02 11:10:33.399192224 +0000 UTC m=+0.871441436 container remove 1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:33 compute-0 systemd[1]: libpod-conmon-1b6055deb56e83679867e81a565abb753cf3858faf304757af01a700b8e0ea12.scope: Deactivated successfully.
Feb 02 11:10:33 compute-0 sudo[77222]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2270004931' entity='client.admin' 
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:33 compute-0 systemd[1]: libpod-70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93.scope: Deactivated successfully.
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:33 compute-0 conmon[77474]: conmon 70ae1bdeb043bca235a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93.scope/container/memory.events
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.475579332 +0000 UTC m=+0.501765680 container died 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:10:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:33 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:10:33 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c22366c7c2badbdd3155e8af5d3cda2d5fa368ee3da7eccc5af07294aae5ddf-merged.mount: Deactivated successfully.
Feb 02 11:10:33 compute-0 podman[77447]: 2026-02-02 11:10:33.525168985 +0000 UTC m=+0.551355323 container remove 70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93 (image=quay.io/ceph/ceph:v19, name=elated_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:10:33 compute-0 systemd[1]: libpod-conmon-70ae1bdeb043bca235a7554c3525e229256ae3ef6a6b2520e3fb841497ae6f93.scope: Deactivated successfully.
Feb 02 11:10:33 compute-0 sudo[77436]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:10:33 compute-0 sudo[78640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78640]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:10:33 compute-0 sudo[78670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78670]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:10:33 compute-0 sudo[78695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78695]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:33 compute-0 sudo[78720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78720]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:10:33 compute-0 sudo[78745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78745]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:33 compute-0 sudo[78793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:10:33 compute-0 sudo[78793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78793]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:10:33 compute-0 sudo[78824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78824]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 11:10:33 compute-0 sudo[78873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78873]: pam_unix(sudo:session): session closed for user root
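The sudo sequence above is cephadm's stage-then-swap file install: create the target directory, stage a *.new copy under /tmp/cephadm-<fsid>, fix ownership and mode, then mv it over /etc/ceph/ceph.conf so readers never observe a half-written file. A sketch of the same idea; it stages next to the target rather than under /tmp, and it is illustrative, not cephadm source:

    import os
    import tempfile

    def install_config(path: str, data: bytes, mode: int = 0o644):
        directory = os.path.dirname(path)
        os.makedirs(directory, exist_ok=True)        # mkdir -p /etc/ceph
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            os.write(fd, data)                       # stage the new contents
            os.fchmod(fd, mode)                      # chmod 644 (600 for keyrings)
        finally:
            os.close(fd)
        os.replace(tmp, path)                        # mv *.new -> final name

Staging in the destination directory keeps os.replace() an atomic same-filesystem rename; the logged flow stages under /tmp, where the final mv may cross filesystems.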
Feb 02 11:10:33 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:10:33 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:10:33 compute-0 sudo[78928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:10:33 compute-0 sudo[78928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78928]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:33 compute-0 sudo[78968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:10:33 compute-0 sudo[78968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:33 compute-0 sudo[78968]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[78993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:10:34 compute-0 sudo[78993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[78993]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:34 compute-0 sudo[79018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79018]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:10:34 compute-0 sudo[79047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79047]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:10:34 compute-0 sudo[79138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lblfeximysvnxgofavnzmjqyeqktxdsu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770030633.8344872-37250-143382211789995/async_wrapper.py j206401704397 30 /home/zuul/.ansible/tmp/ansible-tmp-1770030633.8344872-37250-143382211789995/AnsiballZ_command.py _'
Feb 02 11:10:34 compute-0 sudo[79138]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:34 compute-0 sudo[79191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:10:34 compute-0 sudo[79191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79191]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:10:34 compute-0 sudo[79216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79216]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 sudo[79241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:10:34 compute-0 sudo[79241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 ansible-async_wrapper.py[79190]: Invoked with j206401704397 30 /home/zuul/.ansible/tmp/ansible-tmp-1770030633.8344872-37250-143382211789995/AnsiballZ_command.py _
Feb 02 11:10:34 compute-0 sudo[79241]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 ansible-async_wrapper.py[79268]: Starting module and watcher
Feb 02 11:10:34 compute-0 ansible-async_wrapper.py[79268]: Start watching 79269 (30)
Feb 02 11:10:34 compute-0 ansible-async_wrapper.py[79269]: Start module (79269)
Feb 02 11:10:34 compute-0 ansible-async_wrapper.py[79190]: Return async_wrapper task started.
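The ansible-async_wrapper.py entries above trace Ansible's async task machinery: the wrapper starts the module as a child (pid 79269), hands it to a watcher with the task's 30-second budget ("Start watching 79269 (30)"), and returns to the controller immediately; "Module complete" follows at 11:10:35 once the child exits. A compressed sketch of that watchdog, collapsing the parent and watcher into one function (an illustration of the mechanism, not Ansible's implementation):

    import subprocess

    def run_async(argv, timeout=30):
        proc = subprocess.Popen(argv)              # "Start module (<pid>)"
        print(f"Start watching {proc.pid} ({timeout})")
        try:
            rc = proc.wait(timeout=timeout)        # watcher enforces the budget
            print(f"Module complete ({proc.pid})")
            return rc
        except subprocess.TimeoutExpired:
            proc.kill()                            # module overran its budget
            raise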
Feb 02 11:10:34 compute-0 sudo[79186]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:10:34 compute-0 sudo[79270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79270]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 sudo[79296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79296]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2270004931' entity='client.admin' 
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:34 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:34 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:10:34 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:10:34 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 sudo[79321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:34 compute-0 sudo[79321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79321]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 python3[79271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:34 compute-0 sudo[79346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 sudo[79346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79346]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 podman[79354]: 2026-02-02 11:10:34.546465199 +0000 UTC m=+0.049769639 container create 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:34 compute-0 systemd[1]: Started libpod-conmon-84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c.scope.
Feb 02 11:10:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:34 compute-0 sudo[79406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e739f5a4e0a0bd5092a6972c21325437b4620f75562297f3becc1fc6a765389/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e739f5a4e0a0bd5092a6972c21325437b4620f75562297f3becc1fc6a765389/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:34 compute-0 sudo[79406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79406]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 podman[79354]: 2026-02-02 11:10:34.625538118 +0000 UTC m=+0.128842568 container init 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:34 compute-0 podman[79354]: 2026-02-02 11:10:34.530263891 +0000 UTC m=+0.033568351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:34 compute-0 podman[79354]: 2026-02-02 11:10:34.630791197 +0000 UTC m=+0.134095627 container start 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:34 compute-0 podman[79354]: 2026-02-02 11:10:34.634598631 +0000 UTC m=+0.137903081 container attach 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:34 compute-0 sudo[79436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 sudo[79436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79436]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 sudo[79462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:10:34 compute-0 sudo[79487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:10:34 compute-0 sudo[79487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79487]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:10:34 compute-0 sudo[79531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79531]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 sudo[79556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79556]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:34 compute-0 sudo[79581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79581]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 sudo[79606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:10:34 compute-0 sudo[79606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:34 compute-0 sudo[79606]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:34 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:10:34 compute-0 vibrant_hawking[79431]: 
Feb 02 11:10:34 compute-0 vibrant_hawking[79431]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
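[Annotation] The JSON payload above is the output of "ceph orch status --format json", executed here inside a short-lived ceph container (vibrant_hawking). The same check can be reproduced directly on any admin host; a minimal sketch, assuming the admin keyring is present under /etc/ceph and that jq is installed (jq is not used anywhere in this run, it is only illustrative):

    # Query the orchestrator backend state and pick out the two fields shown in the log.
    ceph orch status --format json | jq '.available, .backend'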
Feb 02 11:10:35 compute-0 systemd[1]: libpod-84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c.scope: Deactivated successfully.
Feb 02 11:10:35 compute-0 podman[79354]: 2026-02-02 11:10:35.005817911 +0000 UTC m=+0.509122371 container died 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e739f5a4e0a0bd5092a6972c21325437b4620f75562297f3becc1fc6a765389-merged.mount: Deactivated successfully.
Feb 02 11:10:35 compute-0 sudo[79656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:10:35 compute-0 sudo[79656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:35 compute-0 podman[79354]: 2026-02-02 11:10:35.040020341 +0000 UTC m=+0.543324771 container remove 84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c (image=quay.io/ceph/ceph:v19, name=vibrant_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:10:35 compute-0 sudo[79656]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:35 compute-0 systemd[1]: libpod-conmon-84ad7a4d1714c74bf46eebbcef31dfb89a462efa4f9e9171856a3233863fc92c.scope: Deactivated successfully.
Feb 02 11:10:35 compute-0 ansible-async_wrapper.py[79269]: Module complete (79269)
Feb 02 11:10:35 compute-0 sudo[79693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:10:35 compute-0 sudo[79693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:35 compute-0 sudo[79693]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:35 compute-0 sudo[79718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:10:35 compute-0 sudo[79718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:35 compute-0 sudo[79718]: pam_unix(sudo:session): session closed for user root
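[Annotation] The sudo trail above is cephadm's staged-write protocol for distributing files: the new keyring is created under a per-fsid /tmp staging tree, ownership and mode are fixed while it still has the ".new" suffix, and only then is it moved onto the destination path. A simplified sketch of the same pattern, with illustrative paths (not the ones cephadm uses); note that mv is an atomic rename(2) only when source and destination sit on the same filesystem:

    # Stage, set ownership/permissions, then rename into place so readers
    # never observe a partially written keyring at the final path.
    stage=/tmp/stage                      # illustrative staging dir
    mkdir -p "$stage"
    touch "$stage/demo.keyring.new"       # demo.keyring is a hypothetical name
    chown root:root "$stage/demo.keyring.new"
    chmod 600 "$stage/demo.keyring.new"
    mv "$stage/demo.keyring.new" /etc/ceph/demo.keyring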
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:35 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev a952ba9f-cfbf-4e04-83ce-e822bea57777 (Updating crash deployment (+1 -> 1))
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb 02 11:10:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
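[Annotation] Before deploying crash.compute-0, the mgr mints a daemon-scoped key and renders a minimal ceph.conf for the container; those are the two mon_commands audited just above. The equivalent direct CLI calls, for reference:

    # Create (or fetch) the crash daemon's key with the standard crash profile caps.
    ceph auth get-or-create client.crash.compute-0 mon 'profile crash' mgr 'profile crash'
    # Render the minimal config the daemon container will be given.
    ceph config generate-minimal-conf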
Feb 02 11:10:35 compute-0 sudo[79743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:35 compute-0 sudo[79743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:35 compute-0 sudo[79743]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:35 compute-0 sudo[79768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:35 compute-0 sudo[79768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.594880898 +0000 UTC m=+0.031442687 container create f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:35 compute-0 sudo[79889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlonrrdpucfyhturznvsudwfkjcdhqw ; /usr/bin/python3'
Feb 02 11:10:35 compute-0 systemd[1]: Started libpod-conmon-f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8.scope.
Feb 02 11:10:35 compute-0 sudo[79889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.64212495 +0000 UTC m=+0.078686759 container init f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.64811073 +0000 UTC m=+0.084672519 container start f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:35 compute-0 competent_panini[79897]: 167 167
Feb 02 11:10:35 compute-0 systemd[1]: libpod-f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8.scope: Deactivated successfully.
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.652042038 +0000 UTC m=+0.088603857 container attach f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.652485752 +0000 UTC m=+0.089047551 container died f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c9506026d654bfeacdf441744845712fe748d924d0d012ab21ad8cb1b9e42d-merged.mount: Deactivated successfully.
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.580448814 +0000 UTC m=+0.017010633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:10:35 compute-0 podman[79855]: 2026-02-02 11:10:35.685565537 +0000 UTC m=+0.122127326 container remove f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_panini, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:10:35 compute-0 systemd[1]: libpod-conmon-f78ced8a6c4af78ff9427af34a3aa6288de503dbbecca9f2036965357b95d8b8.scope: Deactivated successfully.
Feb 02 11:10:35 compute-0 systemd[1]: Reloading.
Feb 02 11:10:35 compute-0 ceph-mgr[74969]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 11:10:35 compute-0 python3[79899]: ansible-ansible.legacy.async_status Invoked with jid=j206401704397.79190 mode=status _async_dir=/root/.ansible_async
Feb 02 11:10:35 compute-0 sudo[79889]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:35 compute-0 systemd-sysv-generator[79943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:10:35 compute-0 systemd-rc-local-generator[79940]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:10:35 compute-0 sudo[79996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhhonjqfxwvhrqhwsyxcbrlytpiwlbo ; /usr/bin/python3'
Feb 02 11:10:35 compute-0 sudo[79996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:35 compute-0 systemd[1]: Reloading.
Feb 02 11:10:36 compute-0 systemd-rc-local-generator[80031]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:10:36 compute-0 systemd-sysv-generator[80034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:10:36 compute-0 python3[80000]: ansible-ansible.legacy.async_status Invoked with jid=j206401704397.79190 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 11:10:36 compute-0 sudo[79996]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:36 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:10:36 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:36 compute-0 ceph-mon[74676]: Deploying daemon crash.compute-0 on compute-0
Feb 02 11:10:36 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:10:36 compute-0 sudo[80121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxqlodozzhmcjeguexffakoxeinxlvw ; /usr/bin/python3'
Feb 02 11:10:36 compute-0 sudo[80121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:36 compute-0 podman[80094]: 2026-02-02 11:10:36.341708222 +0000 UTC m=+0.033327063 container create f1fcd4cff8320df498aff4162263dabedde865ad4190f4d1a248b56f0b524a49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e53c784b4772fddf07bffb67b52a67bf111fe34e8e301a38c08a465932d365e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e53c784b4772fddf07bffb67b52a67bf111fe34e8e301a38c08a465932d365e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e53c784b4772fddf07bffb67b52a67bf111fe34e8e301a38c08a465932d365e/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e53c784b4772fddf07bffb67b52a67bf111fe34e8e301a38c08a465932d365e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
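[Annotation] The kernel's "supports timestamps until 2038" lines are informational, not errors: the xfs filesystem backing the container overlay was created without the bigtime feature, so its inode timestamps are 32-bit. Assuming a reasonably recent xfsprogs that reports the flag, presence of the feature can be checked with xfs_info (the path is illustrative):

    # bigtime=1 means 64-bit timestamps; bigtime=0 (or no field) means the 2038 limit applies.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'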
Feb 02 11:10:36 compute-0 podman[80094]: 2026-02-02 11:10:36.391848311 +0000 UTC m=+0.083467172 container init f1fcd4cff8320df498aff4162263dabedde865ad4190f4d1a248b56f0b524a49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:10:36 compute-0 podman[80094]: 2026-02-02 11:10:36.395311176 +0000 UTC m=+0.086930017 container start f1fcd4cff8320df498aff4162263dabedde865ad4190f4d1a248b56f0b524a49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:10:36 compute-0 bash[80094]: f1fcd4cff8320df498aff4162263dabedde865ad4190f4d1a248b56f0b524a49
Feb 02 11:10:36 compute-0 podman[80094]: 2026-02-02 11:10:36.326470824 +0000 UTC m=+0.018089685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:10:36 compute-0 systemd[1]: Started Ceph crash.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:10:36 compute-0 sudo[79768]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: INFO:ceph-crash:pinging cluster to exercise our key
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev a952ba9f-cfbf-4e04-83ce-e822bea57777 (Updating crash deployment (+1 -> 1))
Feb 02 11:10:36 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event a952ba9f-cfbf-4e04-83ce-e822bea57777 (Updating crash deployment (+1 -> 1)) in 1 seconds
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:10:36 compute-0 python3[80127]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:10:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:36 compute-0 sudo[80121]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:36 compute-0 sudo[80139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:10:36 compute-0 sudo[80139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:36 compute-0 sudo[80139]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.531+0000 7f31a7117640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.531+0000 7f31a7117640 -1 AuthRegistry(0x7f31a006a1a0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.532+0000 7f31a7117640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.532+0000 7f31a7117640 -1 AuthRegistry(0x7f31a7115ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.533+0000 7f31a4e8c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: 2026-02-02T11:10:36.533+0000 7f31a7117640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb 02 11:10:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
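[Annotation] The "RADOS permission denied" above is the crash daemon's startup ping failing: it probed the default admin keyring paths, which are deliberately not mounted into the crash container (only ceph.client.crash.compute-0.keyring is, per the bind-mount lines earlier), and this appears to be a transient condition right after deployment rather than a fatal one, since the daemon proceeds to its 600 s monitoring loop. To confirm the daemon's key exists with the expected caps:

    # Should print the client.crash.compute-0 key with 'profile crash' caps for mon and mgr.
    ceph auth get client.crash.compute-0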
Feb 02 11:10:36 compute-0 sudo[80165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:36 compute-0 sudo[80165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:36 compute-0 sudo[80165]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:36 compute-0 sudo[80199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:10:36 compute-0 sudo[80199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:36 compute-0 sudo[80247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatpddtgbzedfvxuifzxigfpkndxzkds ; /usr/bin/python3'
Feb 02 11:10:36 compute-0 sudo[80247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:36 compute-0 python3[80249]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:36 compute-0 podman[80277]: 2026-02-02 11:10:36.918585613 +0000 UTC m=+0.037344515 container create 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:10:36 compute-0 systemd[1]: Started libpod-conmon-1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965.scope.
Feb 02 11:10:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0633a9df7584045943ca12dd1d6434269dbed580b3d9ac67a1e13e80781ee172/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0633a9df7584045943ca12dd1d6434269dbed580b3d9ac67a1e13e80781ee172/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0633a9df7584045943ca12dd1d6434269dbed580b3d9ac67a1e13e80781ee172/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:36 compute-0 podman[80277]: 2026-02-02 11:10:36.899653343 +0000 UTC m=+0.018412265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:37 compute-0 podman[80277]: 2026-02-02 11:10:37.000085205 +0000 UTC m=+0.118844117 container init 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:37 compute-0 podman[80277]: 2026-02-02 11:10:37.008327473 +0000 UTC m=+0.127086375 container start 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:10:37 compute-0 podman[80277]: 2026-02-02 11:10:37.011187889 +0000 UTC m=+0.129946791 container attach 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:10:37 compute-0 podman[80341]: 2026-02-02 11:10:37.063490493 +0000 UTC m=+0.045240672 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:37 compute-0 podman[80341]: 2026-02-02 11:10:37.159149282 +0000 UTC m=+0.140899451 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:37 compute-0 sudo[80199]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:10:37 compute-0 trusting_maxwell[80320]: 
Feb 02 11:10:37 compute-0 trusting_maxwell[80320]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 systemd[1]: libpod-1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965.scope: Deactivated successfully.
Feb 02 11:10:37 compute-0 podman[80277]: 2026-02-02 11:10:37.38372485 +0000 UTC m=+0.502483752 container died 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0633a9df7584045943ca12dd1d6434269dbed580b3d9ac67a1e13e80781ee172-merged.mount: Deactivated successfully.
Feb 02 11:10:37 compute-0 podman[80277]: 2026-02-02 11:10:37.417854957 +0000 UTC m=+0.536613859 container remove 1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965 (image=quay.io/ceph/ceph:v19, name=trusting_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:10:37 compute-0 sudo[80428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:10:37 compute-0 sudo[80428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:37 compute-0 sudo[80428]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Feb 02 11:10:37 compute-0 systemd[1]: libpod-conmon-1e1ad630dd19f32b74b335768648dac6332141203c687d441d9556c19d389965.scope: Deactivated successfully.
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 sudo[80247]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:10:37 compute-0 sudo[80464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:37 compute-0 sudo[80464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:37 compute-0 sudo[80464]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:37 compute-0 sudo[80489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:37 compute-0 sudo[80489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:37 compute-0 sudo[80537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbfnurezvzgupbjwpcpriufsvopcwmf ; /usr/bin/python3'
Feb 02 11:10:37 compute-0 sudo[80537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
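[Annotation] TOO_FEW_OSDS is expected at this point in the run: osd_pool_default_size is 1 and no OSDs have been deployed yet, so the check trips until the first OSD reports in. To watch it clear as OSD daemons come up:

    ceph health detail    # lists TOO_FEW_OSDS while the OSD count is 0
    ceph osd stat         # shows the OSD count rising as daemons register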
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.799566054 +0000 UTC m=+0.035804638 container create 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:10:37 compute-0 systemd[1]: Started libpod-conmon-24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6.scope.
Feb 02 11:10:37 compute-0 python3[80539]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
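[Annotation] This Ansible task enables file logging cluster-wide through the same containerized admin client pattern as the earlier orch status check. A quick verification of the stored option, under the same assumption that the admin keyring is available:

    # Should print "true" once the config set above has been applied.
    ceph config get global log_to_file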
Feb 02 11:10:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.87819411 +0000 UTC m=+0.114432704 container init 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.783438579 +0000 UTC m=+0.019677183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:37 compute-0 podman[80575]: 2026-02-02 11:10:37.882575322 +0000 UTC m=+0.036115498 container create 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.884131179 +0000 UTC m=+0.120369753 container start 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.887349786 +0000 UTC m=+0.123588390 container attach 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:10:37 compute-0 hungry_heyrovsky[80573]: 167 167
Feb 02 11:10:37 compute-0 systemd[1]: libpod-24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6.scope: Deactivated successfully.
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.888574353 +0000 UTC m=+0.124812927 container died 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:10:37 compute-0 systemd[1]: Started libpod-conmon-02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f.scope.
Feb 02 11:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-991643ef3180fd2a8190175364beae5e75e5d8fea0346a1659e6f110a6f94c27-merged.mount: Deactivated successfully.
Feb 02 11:10:37 compute-0 podman[80557]: 2026-02-02 11:10:37.924405701 +0000 UTC m=+0.160644285 container remove 24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6 (image=quay.io/ceph/ceph:v19, name=hungry_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:10:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f473f77469b602bf4ecf3d3010dfd3138f662b0f7bd4678e4e6b972e09f66a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f473f77469b602bf4ecf3d3010dfd3138f662b0f7bd4678e4e6b972e09f66a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f473f77469b602bf4ecf3d3010dfd3138f662b0f7bd4678e4e6b972e09f66a5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:37 compute-0 systemd[1]: libpod-conmon-24d1a11e5ee3fa2f0afe9dc5528f35b54edab5b523e0d4c74ca85419b6b429f6.scope: Deactivated successfully.
Feb 02 11:10:37 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 1 completed events
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:10:37 compute-0 podman[80575]: 2026-02-02 11:10:37.955429975 +0000 UTC m=+0.108970151 container init 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 podman[80575]: 2026-02-02 11:10:37.960087405 +0000 UTC m=+0.113627591 container start 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:10:37 compute-0 podman[80575]: 2026-02-02 11:10:37.866013194 +0000 UTC m=+0.019553400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:37 compute-0 podman[80575]: 2026-02-02 11:10:37.963290331 +0000 UTC m=+0.116830527 container attach 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:10:37 compute-0 sudo[80489]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dhyzzj (unknown last config time)...
Feb 02 11:10:38 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dhyzzj (unknown last config time)...
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:10:38 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:10:38 compute-0 sudo[80610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:38 compute-0 sudo[80610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:38 compute-0 sudo[80610]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:38 compute-0 sudo[80637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:10:38 compute-0 sudo[80637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658517354' entity='client.admin' 
Feb 02 11:10:38 compute-0 systemd[1]: libpod-02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f.scope: Deactivated successfully.
Feb 02 11:10:38 compute-0 podman[80575]: 2026-02-02 11:10:38.321821121 +0000 UTC m=+0.475361297 container died 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f473f77469b602bf4ecf3d3010dfd3138f662b0f7bd4678e4e6b972e09f66a5-merged.mount: Deactivated successfully.
Feb 02 11:10:38 compute-0 podman[80575]: 2026-02-02 11:10:38.359610347 +0000 UTC m=+0.513150533 container remove 02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f (image=quay.io/ceph/ceph:v19, name=practical_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:38 compute-0 systemd[1]: libpod-conmon-02a4b9fe44ad340f9dd209f3b3447d635c63a98c7591e4f650331098491a3b0f.scope: Deactivated successfully.
Feb 02 11:10:38 compute-0 sudo[80537]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.393852147 +0000 UTC m=+0.043554482 container create 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:10:38 compute-0 systemd[1]: Started libpod-conmon-73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a.scope.
Feb 02 11:10:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:10:38 compute-0 ceph-mon[74676]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:38 compute-0 ceph-mon[74676]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: Reconfiguring mgr.compute-0.dhyzzj (unknown last config time)...
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:10:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/658517354' entity='client.admin' 
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.464820883 +0000 UTC m=+0.114523238 container init 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.470255926 +0000 UTC m=+0.119958261 container start 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.473794303 +0000 UTC m=+0.123496658 container attach 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:38 compute-0 dazzling_dirac[80726]: 167 167
Feb 02 11:10:38 compute-0 systemd[1]: libpod-73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a.scope: Deactivated successfully.
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.379045672 +0000 UTC m=+0.028747997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.474901356 +0000 UTC m=+0.124603691 container died 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:10:38 compute-0 sudo[80753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbqzuymnmcuuidrximdxluwztfgytsj ; /usr/bin/python3'
Feb 02 11:10:38 compute-0 sudo[80753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec636bbd4c28017eee29ef19561b481cea5f02099ea7eb755500d3690d7c74b-merged.mount: Deactivated successfully.
Feb 02 11:10:38 compute-0 podman[80698]: 2026-02-02 11:10:38.506211868 +0000 UTC m=+0.155914203 container remove 73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a (image=quay.io/ceph/ceph:v19, name=dazzling_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:10:38 compute-0 systemd[1]: libpod-conmon-73c115998313794b530ffc6c7fdcc230f1bea667e198027dcac9d7bb43a8075a.scope: Deactivated successfully.
Feb 02 11:10:38 compute-0 sudo[80637]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:10:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:38 compute-0 python3[80763]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:38 compute-0 sudo[80770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:10:38 compute-0 sudo[80770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:38 compute-0 sudo[80770]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:38 compute-0 podman[80791]: 2026-02-02 11:10:38.695400602 +0000 UTC m=+0.053535342 container create c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:10:38 compute-0 systemd[1]: Started libpod-conmon-c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581.scope.
Feb 02 11:10:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923d8583e051a68f0f29ba37baf96ca165f5ef3e079747592b900de964b685e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923d8583e051a68f0f29ba37baf96ca165f5ef3e079747592b900de964b685e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923d8583e051a68f0f29ba37baf96ca165f5ef3e079747592b900de964b685e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:38 compute-0 podman[80791]: 2026-02-02 11:10:38.766365867 +0000 UTC m=+0.124500697 container init c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:10:38 compute-0 podman[80791]: 2026-02-02 11:10:38.677578785 +0000 UTC m=+0.035713555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:38 compute-0 podman[80791]: 2026-02-02 11:10:38.774857453 +0000 UTC m=+0.132992243 container start c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb 02 11:10:38 compute-0 podman[80791]: 2026-02-02 11:10:38.779238575 +0000 UTC m=+0.137373335 container attach c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831819050' entity='client.admin' 
Feb 02 11:10:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:10:39 compute-0 systemd[1]: libpod-c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581.scope: Deactivated successfully.
Feb 02 11:10:39 compute-0 podman[80791]: 2026-02-02 11:10:39.129306489 +0000 UTC m=+0.487441339 container died c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-923d8583e051a68f0f29ba37baf96ca165f5ef3e079747592b900de964b685e6-merged.mount: Deactivated successfully.
Feb 02 11:10:39 compute-0 podman[80791]: 2026-02-02 11:10:39.167489998 +0000 UTC m=+0.525624748 container remove c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581 (image=quay.io/ceph/ceph:v19, name=kind_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:39 compute-0 systemd[1]: libpod-conmon-c2ccf312807ea95f70be30d7e19594ccfe1c2975c9f366fa0a24e3335acb8581.scope: Deactivated successfully.
Feb 02 11:10:39 compute-0 sudo[80836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:10:39 compute-0 sudo[80753]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:39 compute-0 sudo[80836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:39 compute-0 sudo[80836]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:39 compute-0 ansible-async_wrapper.py[79268]: Done in kid B.
Feb 02 11:10:39 compute-0 sudo[80894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpualrmjvtdokciflmisyycdiuedsmwt ; /usr/bin/python3'
Feb 02 11:10:39 compute-0 sudo[80894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:39 compute-0 python3[80896]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:39 compute-0 podman[80897]: 2026-02-02 11:10:39.537647788 +0000 UTC m=+0.043476050 container create 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:10:39 compute-0 systemd[1]: Started libpod-conmon-590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e.scope.
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3831819050' entity='client.admin' 
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:39 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa4dcaa908e0545bc1a3b3317d86d5c240c00666b047b2138223ed0ef17f9a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa4dcaa908e0545bc1a3b3317d86d5c240c00666b047b2138223ed0ef17f9a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa4dcaa908e0545bc1a3b3317d86d5c240c00666b047b2138223ed0ef17f9a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:39 compute-0 podman[80897]: 2026-02-02 11:10:39.612864311 +0000 UTC m=+0.118692573 container init 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:10:39 compute-0 podman[80897]: 2026-02-02 11:10:39.518196422 +0000 UTC m=+0.024024704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:39 compute-0 podman[80897]: 2026-02-02 11:10:39.619320635 +0000 UTC m=+0.125148897 container start 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:10:39 compute-0 podman[80897]: 2026-02-02 11:10:39.622487971 +0000 UTC m=+0.128316233 container attach 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:10:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb 02 11:10:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1248547898' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Feb 02 11:10:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb 02 11:10:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:10:40 compute-0 ceph-mon[74676]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1248547898' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Feb 02 11:10:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1248547898' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 02 11:10:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb 02 11:10:40 compute-0 stupefied_ride[80912]: set require_min_compat_client to mimic
Feb 02 11:10:40 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb 02 11:10:40 compute-0 systemd[1]: libpod-590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e.scope: Deactivated successfully.
Feb 02 11:10:40 compute-0 conmon[80912]: conmon 590b4273e67fcb3a2dbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e.scope/container/memory.events
Feb 02 11:10:40 compute-0 podman[80937]: 2026-02-02 11:10:40.648672782 +0000 UTC m=+0.020605522 container died 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fa4dcaa908e0545bc1a3b3317d86d5c240c00666b047b2138223ed0ef17f9a4-merged.mount: Deactivated successfully.
Feb 02 11:10:40 compute-0 podman[80937]: 2026-02-02 11:10:40.675674694 +0000 UTC m=+0.047607414 container remove 590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e (image=quay.io/ceph/ceph:v19, name=stupefied_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:10:40 compute-0 systemd[1]: libpod-conmon-590b4273e67fcb3a2dbec53dcd8e44966b6045cb24fcb0daa6d8ab3ebf262c5e.scope: Deactivated successfully.
Feb 02 11:10:40 compute-0 sudo[80894]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:41 compute-0 sudo[80975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrjkqrtqlvozldzizhqhjpowjhaojau ; /usr/bin/python3'
Feb 02 11:10:41 compute-0 sudo[80975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:41 compute-0 python3[80977]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:41 compute-0 podman[80978]: 2026-02-02 11:10:41.257913026 +0000 UTC m=+0.046070228 container create 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:41 compute-0 systemd[1]: Started libpod-conmon-1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619.scope.
Feb 02 11:10:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3b83b8f08277347a9864c30481b321af2482566ded44f8264db46aed57058/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3b83b8f08277347a9864c30481b321af2482566ded44f8264db46aed57058/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3b83b8f08277347a9864c30481b321af2482566ded44f8264db46aed57058/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:41 compute-0 podman[80978]: 2026-02-02 11:10:41.328874951 +0000 UTC m=+0.117032193 container init 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:10:41 compute-0 podman[80978]: 2026-02-02 11:10:41.235067048 +0000 UTC m=+0.023224300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:41 compute-0 podman[80978]: 2026-02-02 11:10:41.334381597 +0000 UTC m=+0.122538799 container start 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Feb 02 11:10:41 compute-0 podman[80978]: 2026-02-02 11:10:41.338298855 +0000 UTC m=+0.126456077 container attach 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:10:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1248547898' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 02 11:10:41 compute-0 ceph-mon[74676]: osdmap e3: 0 total, 0 up, 0 in
Feb 02 11:10:41 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:41 compute-0 sudo[81017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:10:41 compute-0 sudo[81017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:41 compute-0 sudo[81017]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:41 compute-0 sudo[81042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Feb 02 11:10:41 compute-0 sudo[81042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:42 compute-0 sudo[81042]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:42 compute-0 ceph-mgr[74969]: [cephadm INFO root] Added host compute-0
Feb 02 11:10:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:10:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:42 compute-0 sudo[81087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:10:42 compute-0 sudo[81087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:10:42 compute-0 sudo[81087]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:10:43 compute-0 ceph-mon[74676]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:43 compute-0 ceph-mon[74676]: Added host compute-0
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:10:43 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:43 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Feb 02 11:10:43 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Feb 02 11:10:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:44 compute-0 ceph-mon[74676]: Deploying cephadm binary to compute-1
Feb 02 11:10:45 compute-0 ceph-mon[74676]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:46 compute-0 ceph-mgr[74969]: [cephadm INFO root] Added host compute-1
Feb 02 11:10:46 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Added host compute-1
Feb 02 11:10:47 compute-0 ceph-mon[74676]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:47 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:10:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:10:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Feb 02 11:10:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Feb 02 11:10:48 compute-0 ceph-mon[74676]: Added host compute-1
Feb 02 11:10:48 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:48 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:10:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:49 compute-0 ceph-mon[74676]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:49 compute-0 ceph-mon[74676]: Deploying cephadm binary to compute-2
Feb 02 11:10:49 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:51 compute-0 ceph-mon[74676]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 11:10:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Added host compute-2
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Added host compute-2
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:10:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:10:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb 02 11:10:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Added host 'compute-0' with addr '192.168.122.100'
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Added host 'compute-1' with addr '192.168.122.101'
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Added host 'compute-2' with addr '192.168.122.102'
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Scheduled mon update...
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Scheduled mgr update...
Feb 02 11:10:51 compute-0 nice_feistel[80993]: Scheduled osd.default_drive_group update...
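The three "Added host" lines and the scheduled mon/mgr/osd updates are the usual output of ceph orch apply -i <spec>, run here from the nice_feistel container with /home/ceph_spec.yaml bind-mounted. The file itself is not captured in the log; a minimal reconstruction consistent with the logged placements, addresses, and the /dev/ceph_vg0/ceph_lv0 device used further below would look like this (a sketch, not the actual file):

    # Hypothetical reconstruction of /home/ceph-admin/specs/ceph_spec.yaml;
    # hostnames, addresses, and the LV path come from the log, the rest is assumed.
    cat > ceph_spec.yaml <<'EOF'
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: host
    hostname: compute-1
    addr: 192.168.122.101
    ---
    service_type: host
    hostname: compute-2
    addr: 192.168.122.102
    ---
    service_type: mon
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: mgr
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts: [compute-0, compute-1, compute-2]
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
    EOF
    ceph orch apply -i ceph_spec.yaml   # what the container invocation boils down to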
Feb 02 11:10:51 compute-0 systemd[1]: libpod-1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619.scope: Deactivated successfully.
Feb 02 11:10:51 compute-0 podman[80978]: 2026-02-02 11:10:51.585218864 +0000 UTC m=+10.373376056 container died 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-94d3b83b8f08277347a9864c30481b321af2482566ded44f8264db46aed57058-merged.mount: Deactivated successfully.
Feb 02 11:10:51 compute-0 podman[80978]: 2026-02-02 11:10:51.621503716 +0000 UTC m=+10.409660918 container remove 1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619 (image=quay.io/ceph/ceph:v19, name=nice_feistel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:10:51 compute-0 systemd[1]: libpod-conmon-1993d6718c53126cdb737b5f965ff0fd9ec3d65bd4827fc785bd40bd2655e619.scope: Deactivated successfully.
Feb 02 11:10:51 compute-0 sudo[80975]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:51 compute-0 sudo[81148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnnpyfsvkzryzztyajnpkqqbooofyxpw ; /usr/bin/python3'
Feb 02 11:10:51 compute-0 sudo[81148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:10:52 compute-0 python3[81150]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.050443714 +0000 UTC m=+0.037350325 container create 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:10:52 compute-0 systemd[1]: Started libpod-conmon-07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0.scope.
Feb 02 11:10:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582e1f9d2cd3a6390324651aed32d1ab2e3e79ef21004f217c8b71a72547c103/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582e1f9d2cd3a6390324651aed32d1ab2e3e79ef21004f217c8b71a72547c103/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582e1f9d2cd3a6390324651aed32d1ab2e3e79ef21004f217c8b71a72547c103/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.124170423 +0000 UTC m=+0.111077054 container init 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.033417552 +0000 UTC m=+0.020324183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.128803922 +0000 UTC m=+0.115710533 container start 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.132410391 +0000 UTC m=+0.119317032 container attach 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 11:10:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275774001' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:10:52 compute-0 jolly_pasteur[81168]: 
Feb 02 11:10:52 compute-0 jolly_pasteur[81168]: {"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":52,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T11:09:58:610843+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T11:09:58.613481+0000","services":{}},"progress_events":{}}
Feb 02 11:10:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Added host compute-2
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Saving service mon spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Marking host: compute-1 for OSDSpec preview refresh.
Feb 02 11:10:52 compute-0 ceph-mon[74676]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb 02 11:10:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:10:52 compute-0 ceph-mon[74676]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/275774001' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:10:52 compute-0 systemd[1]: libpod-07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0.scope: Deactivated successfully.
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.555322458 +0000 UTC m=+0.542229089 container died 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-582e1f9d2cd3a6390324651aed32d1ab2e3e79ef21004f217c8b71a72547c103-merged.mount: Deactivated successfully.
Feb 02 11:10:52 compute-0 podman[81152]: 2026-02-02 11:10:52.585498896 +0000 UTC m=+0.572405517 container remove 07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0 (image=quay.io/ceph/ceph:v19, name=jolly_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:10:52 compute-0 systemd[1]: libpod-conmon-07b5a87b70d7df3df27e9096085749921516d48bfc0769b47dd49fc3de8180a0.scope: Deactivated successfully.
Feb 02 11:10:52 compute-0 sudo[81148]: pam_unix(sudo:session): session closed for user root
Feb 02 11:10:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:54 compute-0 ceph-mon[74676]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:10:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:56 compute-0 ceph-mon[74676]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:58 compute-0 ceph-mon[74676]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:10:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:00 compute-0 ceph-mon[74676]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:02 compute-0 ceph-mon[74676]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:11:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:11:04 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:11:04 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:11:04 compute-0 ceph-mon[74676]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:11:05 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:05 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:05 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:05 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:05 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:11:05 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev bb0b310f-5e63-4809-8996-20753cd64bc4 (Updating crash deployment (+1 -> 2))
Feb 02 11:11:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:11:06.591+0000 7fef06d64640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: service_name: mon
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: placement:
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   hosts:
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-0
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-1
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-2
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:11:06.592+0000 7fef06d64640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: service_name: mgr
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: placement:
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   hosts:
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-0
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-1
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   - compute-2
Feb 02 11:11:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:11:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Feb 02 11:11:06 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Feb 02 11:11:07 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:07 compute-0 ceph-mon[74676]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:07 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:11:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb 02 11:11:08 compute-0 ceph-mon[74676]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:08 compute-0 ceph-mon[74676]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb 02 11:11:08 compute-0 ceph-mon[74676]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:08 compute-0 ceph-mon[74676]: Deploying daemon crash.compute-1 on compute-1
Feb 02 11:11:08 compute-0 ceph-mon[74676]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
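The failed applies surface as a cluster health warning, CEPHADM_APPLY_SPEC_FAIL. cephadm re-applies saved specs on each serve pass, so the warning should clear on its own once a pass succeeds; in the meantime the detail is visible with:

    ceph health detail
    # HEALTH_WARN Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
    # (per-service error text follows, matching the spec failures logged above)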
Feb 02 11:11:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:08 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev bb0b310f-5e63-4809-8996-20753cd64bc4 (Updating crash deployment (+1 -> 2))
Feb 02 11:11:08 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event bb0b310f-5e63-4809-8996-20753cd64bc4 (Updating crash deployment (+1 -> 2)) in 2 seconds
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:11:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:08 compute-0 sudo[81206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:08 compute-0 sudo[81206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:08 compute-0 sudo[81206]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:08 compute-0 sudo[81231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:11:08 compute-0 sudo[81231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
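This sudo line is cephadm's checksummed copy of itself running ceph-volume inside a container; CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tags the resulting OSD so the orchestrator can match it back to the osd.default_drive_group spec. Stripped of the wrapper, the call reduces to (taken from the command line above):

    export CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group
    ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
    # --no-auto: take the listed device as given instead of auto-classifying devices
    # --no-systemd: skip unit enable/start; cephadm manages the daemon unit itself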
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.14667711 +0000 UTC m=+0.037631274 container create bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:11:09 compute-0 systemd[1]: Started libpod-conmon-bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569.scope.
Feb 02 11:11:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.208899972 +0000 UTC m=+0.099854156 container init bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.214691326 +0000 UTC m=+0.105645490 container start bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:11:09 compute-0 fervent_maxwell[81314]: 167 167
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.21780882 +0000 UTC m=+0.108762984 container attach bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:11:09 compute-0 systemd[1]: libpod-bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569.scope: Deactivated successfully.
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.219245993 +0000 UTC m=+0.110200187 container died bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.129574415 +0000 UTC m=+0.020528619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-371a1afac0f835bf8f68b167311f4f6c7bddf7c37086ebfd602307e0fa397742-merged.mount: Deactivated successfully.
Feb 02 11:11:09 compute-0 podman[81297]: 2026-02-02 11:11:09.250933797 +0000 UTC m=+0.141888001 container remove bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:11:09 compute-0 systemd[1]: libpod-conmon-bc850926185f1ab26840486f76863337ced34986852ba13d393f3c038a45e569.scope: Deactivated successfully.
Feb 02 11:11:09 compute-0 podman[81337]: 2026-02-02 11:11:09.370398522 +0000 UTC m=+0.039720336 container create 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:09 compute-0 systemd[1]: Started libpod-conmon-3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9.scope.
Feb 02 11:11:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:09 compute-0 podman[81337]: 2026-02-02 11:11:09.441231234 +0000 UTC m=+0.110553098 container init 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:11:09 compute-0 podman[81337]: 2026-02-02 11:11:09.448461081 +0000 UTC m=+0.117782895 container start 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:11:09 compute-0 podman[81337]: 2026-02-02 11:11:09.352623637 +0000 UTC m=+0.021945471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:09 compute-0 podman[81337]: 2026-02-02 11:11:09.451635087 +0000 UTC m=+0.120956931 container attach 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:11:09 compute-0 ceph-mon[74676]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:11:09 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:09 compute-0 elated_tharp[81354]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:11:09 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:09 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:09 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1ce0bc48-ed90-4057-9723-8baf8c87f572
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ea302c19-e2ae-4259-ac75-38769976b9be"} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1561091607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea302c19-e2ae-4259-ac75-38769976b9be"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1561091607' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea302c19-e2ae-4259-ac75-38769976b9be"}]': finished
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1ce0bc48-ed90-4057-9723-8baf8c87f572"} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3157808301' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ce0bc48-ed90-4057-9723-8baf8c87f572"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3157808301' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ce0bc48-ed90-4057-9723-8baf8c87f572"}]': finished
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:10 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
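Two "osd new" round-trips, one per bootstrap-osd client (192.168.122.101 and 192.168.122.100), register the ceph-volume-generated uuids and receive the next free OSD ids, growing the osdmap from 1 total to 2. This host's uuid 1ce0bc48-ed90-4057-9723-8baf8c87f572 came back as osd.1, which is why the ceph-volume output below works under /var/lib/ceph/osd/ceph-1. The "failed to return metadata" messages are expected while the new OSDs exist in the map but have not yet started and registered metadata. On the host, the claim step is (from the log):

    ceph --cluster ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         -i - osd new 1ce0bc48-ed90-4057-9723-8baf8c87f572
    # stdin carries the new OSD's cephx key material (generated by the
    # ceph-authtool --gen-print-key calls above); stdout returns the id, 1 here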
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 02 11:11:10 compute-0 lvm[81415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:11:10 compute-0 lvm[81415]: VG ceph_vg0 finished
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/980791800' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1561091607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea302c19-e2ae-4259-ac75-38769976b9be"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1561091607' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea302c19-e2ae-4259-ac75-38769976b9be"}]': finished
Feb 02 11:11:10 compute-0 ceph-mon[74676]: osdmap e4: 1 total, 0 up, 1 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3157808301' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ce0bc48-ed90-4057-9723-8baf8c87f572"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3157808301' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ce0bc48-ed90-4057-9723-8baf8c87f572"}]': finished
Feb 02 11:11:10 compute-0 ceph-mon[74676]: osdmap e5: 2 total, 0 up, 2 in
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/980791800' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb 02 11:11:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 02 11:11:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312183182' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb 02 11:11:10 compute-0 elated_tharp[81354]:  stderr: got monmap epoch 1
Feb 02 11:11:10 compute-0 elated_tharp[81354]: --> Creating keyring file for osd.1
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 02 11:11:10 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1ce0bc48-ed90-4057-9723-8baf8c87f572 --setuser ceph --setgroup ceph
Feb 02 11:11:11 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 02 11:11:11 compute-0 ceph-mon[74676]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:11 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2312183182' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb 02 11:11:11 compute-0 ceph-mon[74676]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 02 11:11:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:12 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 2 completed events
Feb 02 11:11:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:11:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:13 compute-0 elated_tharp[81354]:  stderr: 2026-02-02T11:11:10.802+0000 7f4cc2f2d740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb 02 11:11:13 compute-0 elated_tharp[81354]:  stderr: 2026-02-02T11:11:11.071+0000 7f4cc2f2d740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
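Note: the two stderr lines above are expected first-format noise rather than errors: before writing anything, --mkfs probes the LV for an existing bluestore label and fsid, and on a blank device both probes necessarily fail, after which mkfs lays the label down itself; the "prepare successful" line that follows confirms the outcome. To inspect the label afterwards (assuming ceph-bluestore-tool on the host or inside the ceph container):

    # Dump the bluestore label mkfs just wrote; on a truly blank LV this
    # is exactly what the earlier probe could not find.
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0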
Feb 02 11:11:13 compute-0 elated_tharp[81354]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 11:11:13 compute-0 elated_tharp[81354]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:13 compute-0 elated_tharp[81354]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 02 11:11:13 compute-0 elated_tharp[81354]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
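Note: everything this container logged is the expanded trace of a single "ceph-volume lvm create", which chains "prepare" (tmpfs mount, osd new, mkfs, keyring) and "activate" (prime-osd-dir, block symlink, ownership fixes). cephadm drives it inside a one-shot container, but the CLI is the same on a bare host (sketch):

    # One call reproduces the whole prepare+activate sequence above.
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0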
Feb 02 11:11:13 compute-0 systemd[1]: libpod-3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9.scope: Deactivated successfully.
Feb 02 11:11:13 compute-0 systemd[1]: libpod-3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9.scope: Consumed 1.785s CPU time.
Feb 02 11:11:13 compute-0 ceph-mon[74676]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:13 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:13 compute-0 podman[82337]: 2026-02-02 11:11:13.99898856 +0000 UTC m=+0.023987793 container died 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee4c35f650cce8b3eb49b5225bfdb7863c5b2b53e401fcb690a5059fa82eb914-merged.mount: Deactivated successfully.
Feb 02 11:11:14 compute-0 podman[82337]: 2026-02-02 11:11:14.030841169 +0000 UTC m=+0.055840402 container remove 3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_tharp, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:11:14 compute-0 systemd[1]: libpod-conmon-3f688cdf01b8ee5c342ca1ca72f4d501eac0fb7388dce5d96e3fdec8ec8f0fc9.scope: Deactivated successfully.
Feb 02 11:11:14 compute-0 sudo[81231]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:14 compute-0 sudo[82352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:14 compute-0 sudo[82352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:14 compute-0 sudo[82352]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:14 compute-0 sudo[82377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:11:14 compute-0 sudo[82377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.494526091 +0000 UTC m=+0.036249032 container create b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:11:14 compute-0 systemd[1]: Started libpod-conmon-b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a.scope.
Feb 02 11:11:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.557037883 +0000 UTC m=+0.098760844 container init b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.562802086 +0000 UTC m=+0.104525027 container start b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:11:14 compute-0 funny_jepsen[82459]: 167 167
Feb 02 11:11:14 compute-0 systemd[1]: libpod-b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a.scope: Deactivated successfully.
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.566424815 +0000 UTC m=+0.108147786 container attach b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.567165217 +0000 UTC m=+0.108888168 container died b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.479612652 +0000 UTC m=+0.021335603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc944a1e1305070b0542fe47ac8eb754428e649fd6fa3714427133ee1cdbcfd-merged.mount: Deactivated successfully.
Feb 02 11:11:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:14 compute-0 podman[82442]: 2026-02-02 11:11:14.607025037 +0000 UTC m=+0.148747978 container remove b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jepsen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:11:14 compute-0 systemd[1]: libpod-conmon-b3b5d01b37fb4195edd4317259c874a5bbe22f18cd2ab38f395d6880a3161a2a.scope: Deactivated successfully.
Feb 02 11:11:14 compute-0 podman[82483]: 2026-02-02 11:11:14.732349788 +0000 UTC m=+0.036952763 container create 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:11:14 compute-0 systemd[1]: Started libpod-conmon-5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4.scope.
Feb 02 11:11:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38b8a65922da85cddac3ec9680982fe47e0e81766455a94eb06bff72e37af2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38b8a65922da85cddac3ec9680982fe47e0e81766455a94eb06bff72e37af2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38b8a65922da85cddac3ec9680982fe47e0e81766455a94eb06bff72e37af2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38b8a65922da85cddac3ec9680982fe47e0e81766455a94eb06bff72e37af2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:14 compute-0 podman[82483]: 2026-02-02 11:11:14.804885881 +0000 UTC m=+0.109488896 container init 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:11:14 compute-0 podman[82483]: 2026-02-02 11:11:14.811307474 +0000 UTC m=+0.115910459 container start 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Feb 02 11:11:14 compute-0 podman[82483]: 2026-02-02 11:11:14.715891143 +0000 UTC m=+0.020494148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:14 compute-0 podman[82483]: 2026-02-02 11:11:14.814291764 +0000 UTC m=+0.118894769 container attach 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:11:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 02 11:11:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb 02 11:11:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:14 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Feb 02 11:11:14 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Feb 02 11:11:14 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb 02 11:11:14 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
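Note: the "auth get" / "config generate-minimal-conf" pair is the mgr assembling the payload it ships when deploying osd.0 to compute-1: the daemon's keyring plus a minimal ceph.conf for its data directory. The equivalent manual queries (assuming an admin keyring):

    # What cephadm places in the daemon's data dir on the target host:
    ceph auth get osd.0                   # keyring for the daemon
    ceph config generate-minimal-conf     # minimal ceph.conf (mon addresses etc.)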
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]: {
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:     "1": [
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:         {
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "devices": [
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "/dev/loop3"
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             ],
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "lv_name": "ceph_lv0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "lv_size": "21470642176",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "name": "ceph_lv0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "tags": {
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.cluster_name": "ceph",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.crush_device_class": "",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.encrypted": "0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.osd_id": "1",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.type": "block",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.vdo": "0",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:                 "ceph.with_tpm": "0"
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             },
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "type": "block",
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:             "vg_name": "ceph_vg0"
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:         }
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]:     ]
Feb 02 11:11:15 compute-0 suspicious_hertz[82500]: }
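Note: the JSON above is the output of the "ceph-volume -- lvm list --format json" call issued at 11:11:14, keyed by OSD id; the LVM tags carry everything needed to re-activate the OSD (cluster fsid, osd fsid, block device). A quick extraction, assuming jq is installed on the host:

    # Pull osd.1's fsid out of the listing; cephadm parses this same
    # structure when reconciling which OSDs live on the host.
    cephadm ceph-volume -- lvm list --format json \
        | jq -r '.["1"][0].tags["ceph.osd_fsid"]'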
Feb 02 11:11:15 compute-0 systemd[1]: libpod-5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4.scope: Deactivated successfully.
Feb 02 11:11:15 compute-0 podman[82483]: 2026-02-02 11:11:15.088893628 +0000 UTC m=+0.393496613 container died 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca38b8a65922da85cddac3ec9680982fe47e0e81766455a94eb06bff72e37af2-merged.mount: Deactivated successfully.
Feb 02 11:11:15 compute-0 podman[82483]: 2026-02-02 11:11:15.122431217 +0000 UTC m=+0.427034202 container remove 5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:11:15 compute-0 systemd[1]: libpod-conmon-5ef180005effdb1d90fc2d0304471517ed18968f80ddb8cf320887876c1a01e4.scope: Deactivated successfully.
Feb 02 11:11:15 compute-0 sudo[82377]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 02 11:11:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb 02 11:11:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:15 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb 02 11:11:15 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb 02 11:11:15 compute-0 sudo[82523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:15 compute-0 sudo[82523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:15 compute-0 sudo[82523]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:15 compute-0 sudo[82548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:11:15 compute-0 sudo[82548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.583788871 +0000 UTC m=+0.036694106 container create 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:15 compute-0 systemd[1]: Started libpod-conmon-148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64.scope.
Feb 02 11:11:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.646174968 +0000 UTC m=+0.099080233 container init 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.651189739 +0000 UTC m=+0.104094974 container start 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.654230461 +0000 UTC m=+0.107135696 container attach 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:15 compute-0 systemd[1]: libpod-148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64.scope: Deactivated successfully.
Feb 02 11:11:15 compute-0 sweet_hodgkin[82628]: 167 167
Feb 02 11:11:15 compute-0 conmon[82628]: conmon 148349e39a6a5987cadf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64.scope/container/memory.events
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.656098267 +0000 UTC m=+0.109003502 container died 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.567977985 +0000 UTC m=+0.020883220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-db42cf4b8828b6fe714780f2242ef10cf10d317012bd49c55faf7f3c762fedba-merged.mount: Deactivated successfully.
Feb 02 11:11:15 compute-0 podman[82612]: 2026-02-02 11:11:15.684254554 +0000 UTC m=+0.137159789 container remove 148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hodgkin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:15 compute-0 systemd[1]: libpod-conmon-148349e39a6a5987cadf3f6c298f844472e6327027ca07deb1fa565465790e64.scope: Deactivated successfully.
Feb 02 11:11:15 compute-0 podman[82659]: 2026-02-02 11:11:15.857053984 +0000 UTC m=+0.033841849 container create 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:15 compute-0 systemd[1]: Started libpod-conmon-25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036.scope.
Feb 02 11:11:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:15 compute-0 podman[82659]: 2026-02-02 11:11:15.926728831 +0000 UTC m=+0.103516716 container init 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:11:15 compute-0 podman[82659]: 2026-02-02 11:11:15.935962109 +0000 UTC m=+0.112749974 container start 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:11:15 compute-0 podman[82659]: 2026-02-02 11:11:15.938604158 +0000 UTC m=+0.115392023 container attach 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:15 compute-0 podman[82659]: 2026-02-02 11:11:15.843733993 +0000 UTC m=+0.020521868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:15 compute-0 ceph-mon[74676]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:15 compute-0 ceph-mon[74676]: Deploying daemon osd.0 on compute-1
Feb 02 11:11:15 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb 02 11:11:15 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:15 compute-0 ceph-mon[74676]: Deploying daemon osd.1 on compute-0
Feb 02 11:11:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test[82676]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 02 11:11:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test[82676]:                             [--no-systemd] [--no-tmpfs]
Feb 02 11:11:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test[82676]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 02 11:11:16 compute-0 systemd[1]: libpod-25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036.scope: Deactivated successfully.
Feb 02 11:11:16 compute-0 podman[82659]: 2026-02-02 11:11:16.162935759 +0000 UTC m=+0.339723624 container died 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-985920f8753f1e9f61f5869b3ffac8d64132e215615296dcdf6566c49dda3722-merged.mount: Deactivated successfully.
Feb 02 11:11:16 compute-0 podman[82659]: 2026-02-02 11:11:16.20317523 +0000 UTC m=+0.379963095 container remove 25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:11:16 compute-0 systemd[1]: libpod-conmon-25a25f71879738b06afa31c4984ef0fd45deef072d336e171ec3c2255d1d8036.scope: Deactivated successfully.
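Note: the "unrecognized arguments: --bad-option" failure above appears to be deliberate: the container is named ...osd-1-activate-test, and feeding ceph-volume an argument it cannot parse is a cheap capability probe; a usage error from "activate" indicates the image ships the top-level subcommand (older images only had "lvm activate"), and the real activation runs right afterwards. The probe can be reproduced by hand (sketch; exact wording may vary by release):

    # A usage error means 'activate' exists; an image without it would
    # instead report an invalid choice at the subcommand level.
    ceph-volume activate --bad-option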
Feb 02 11:11:16 compute-0 systemd[1]: Reloading.
Feb 02 11:11:16 compute-0 systemd-sysv-generator[82740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:11:16 compute-0 systemd-rc-local-generator[82737]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:11:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:16 compute-0 systemd[1]: Reloading.
Feb 02 11:11:16 compute-0 systemd-sysv-generator[82781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:11:16 compute-0 systemd-rc-local-generator[82776]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:11:16 compute-0 systemd[1]: Starting Ceph osd.1 for 1d33f80b-d6ca-501c-bac7-184379b89279...
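Note: the two "Reloading." cycles are systemd picking up the unit files cephadm just installed and enabled for the new daemon; containerized daemons get units named ceph-<fsid>@<daemon>.service, so this one can be inspected with:

    # Unit name follows cephadm's ceph-<fsid>@<daemon> template.
    systemctl status 'ceph-1d33f80b-d6ca-501c-bac7-184379b89279@osd.1.service'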
Feb 02 11:11:17 compute-0 podman[82836]: 2026-02-02 11:11:17.050594332 +0000 UTC m=+0.034546071 container create 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 11:11:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:17 compute-0 podman[82836]: 2026-02-02 11:11:17.112258067 +0000 UTC m=+0.096209806 container init 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:11:17 compute-0 podman[82836]: 2026-02-02 11:11:17.116538006 +0000 UTC m=+0.100489745 container start 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:17 compute-0 podman[82836]: 2026-02-02 11:11:17.119927888 +0000 UTC m=+0.103879627 container attach 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:11:17 compute-0 podman[82836]: 2026-02-02 11:11:17.035304051 +0000 UTC m=+0.019255820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 lvm[82933]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:11:17 compute-0 lvm[82933]: VG ceph_vg0 finished
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:11:17
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [balancer INFO root] No pools available
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 bash[82836]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 lvm[82937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:11:17 compute-0 lvm[82937]: VG ceph_vg0 finished
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 11:11:17 compute-0 bash[82836]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 11:11:17 compute-0 ceph-mon[74676]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 bash[82836]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 bash[82836]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 11:11:18 compute-0 bash[82836]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 11:11:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:18 compute-0 bash[82836]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 11:11:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate[82852]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 02 11:11:18 compute-0 bash[82836]: --> ceph-volume lvm activate successful for osd ID: 1
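Note: the earlier "Failed to activate via raw" line is the generic "ceph-volume activate" trying raw-device activation first; this OSD is LVM-backed, so no raw OSD matches and the tool falls back to LVM activation, which succeeds here. The manual equivalent, using the id and fsid from the lvm list output above:

    # --no-systemd mirrors the in-container activation: systemd lives on
    # the host, so the container must not try to start units itself.
    ceph-volume lvm activate 1 1ce0bc48-ed90-4057-9723-8baf8c87f572 --no-systemd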
Feb 02 11:11:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:18 compute-0 systemd[1]: libpod-334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d.scope: Deactivated successfully.
Feb 02 11:11:18 compute-0 systemd[1]: libpod-334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d.scope: Consumed 1.078s CPU time.
Feb 02 11:11:18 compute-0 podman[82836]: 2026-02-02 11:11:18.234268101 +0000 UTC m=+1.218219840 container died 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:11:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2072f65f17b353dbb2cecd481cc9f6bca4c7323706b7c2ab6b43e2d4eff2a5a-merged.mount: Deactivated successfully.
Feb 02 11:11:18 compute-0 podman[82836]: 2026-02-02 11:11:18.271693867 +0000 UTC m=+1.255645606 container remove 334e5718459fdcbfd8b9097b89bd03ecf918cdc025ea33e9643457b83b400f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:11:18 compute-0 podman[83103]: 2026-02-02 11:11:18.432178937 +0000 UTC m=+0.031738726 container create 0576ae0b033fcba43dfe39796c16b81ba9c5a3f233b55a9c64c1aab22dd5597c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf86db969ef9694d6505e23ab51b7ab18952edfa1ec70276f648bc15e41eea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf86db969ef9694d6505e23ab51b7ab18952edfa1ec70276f648bc15e41eea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf86db969ef9694d6505e23ab51b7ab18952edfa1ec70276f648bc15e41eea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf86db969ef9694d6505e23ab51b7ab18952edfa1ec70276f648bc15e41eea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf86db969ef9694d6505e23ab51b7ab18952edfa1ec70276f648bc15e41eea2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:18 compute-0 podman[83103]: 2026-02-02 11:11:18.485980366 +0000 UTC m=+0.085540165 container init 0576ae0b033fcba43dfe39796c16b81ba9c5a3f233b55a9c64c1aab22dd5597c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:11:18 compute-0 podman[83103]: 2026-02-02 11:11:18.493667627 +0000 UTC m=+0.093227396 container start 0576ae0b033fcba43dfe39796c16b81ba9c5a3f233b55a9c64c1aab22dd5597c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:18 compute-0 bash[83103]: 0576ae0b033fcba43dfe39796c16b81ba9c5a3f233b55a9c64c1aab22dd5597c
Feb 02 11:11:18 compute-0 podman[83103]: 2026-02-02 11:11:18.417630179 +0000 UTC m=+0.017189968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:18 compute-0 systemd[1]: Started Ceph osd.1 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:11:18 compute-0 ceph-osd[83123]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:11:18 compute-0 ceph-osd[83123]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Feb 02 11:11:18 compute-0 ceph-osd[83123]: pidfile_write: ignore empty --pid-file
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:18 compute-0 sudo[82548]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:11:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:11:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:18 compute-0 sudo[83135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:18 compute-0 sudo[83135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:18 compute-0 sudo[83135]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:18 compute-0 sudo[83160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:11:18 compute-0 sudo[83160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:18 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.058434173 +0000 UTC m=+0.045929983 container create 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:19 compute-0 systemd[1]: Started libpod-conmon-82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed.scope.
Feb 02 11:11:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.120744608 +0000 UTC m=+0.108240438 container init 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.127622655 +0000 UTC m=+0.115118475 container start 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:11:19 compute-0 objective_bouman[83248]: 167 167
Feb 02 11:11:19 compute-0 systemd[1]: libpod-82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed.scope: Deactivated successfully.
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.038646267 +0000 UTC m=+0.026142097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.134816341 +0000 UTC m=+0.122312181 container attach 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.135549533 +0000 UTC m=+0.123045343 container died 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-95301fe2a759ccddf2ec879cba0217e3421766f295b00f37c27e7129ba7d5e7b-merged.mount: Deactivated successfully.
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:19 compute-0 podman[83232]: 2026-02-02 11:11:19.168425863 +0000 UTC m=+0.155921673 container remove 82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:11:19 compute-0 systemd[1]: libpod-conmon-82601e0a3c898b813ecc17b4d3ac0ddba2c22a73d1e4f7f8e043ddd38de126ed.scope: Deactivated successfully.
Feb 02 11:11:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 podman[83273]: 2026-02-02 11:11:19.290483796 +0000 UTC m=+0.038573252 container create 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:19 compute-0 systemd[1]: Started libpod-conmon-906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42.scope.
Feb 02 11:11:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b721bc63b8c72731cb0e5c2666ce7bce297dc3f8caf16062152ac2f773a3cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b721bc63b8c72731cb0e5c2666ce7bce297dc3f8caf16062152ac2f773a3cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b721bc63b8c72731cb0e5c2666ce7bce297dc3f8caf16062152ac2f773a3cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b721bc63b8c72731cb0e5c2666ce7bce297dc3f8caf16062152ac2f773a3cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:19 compute-0 podman[83273]: 2026-02-02 11:11:19.275174715 +0000 UTC m=+0.023264191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:19 compute-0 podman[83273]: 2026-02-02 11:11:19.376412212 +0000 UTC m=+0.124501688 container init 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:11:19 compute-0 podman[83273]: 2026-02-02 11:11:19.38168353 +0000 UTC m=+0.129772986 container start 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:11:19 compute-0 podman[83273]: 2026-02-02 11:11:19.38699433 +0000 UTC m=+0.135083796 container attach 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba93ad9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:19 compute-0 ceph-osd[83123]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 02 11:11:19 compute-0 ceph-osd[83123]: load: jerasure load: lrc 
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 11:11:19 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:19 compute-0 lvm[83371]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:11:19 compute-0 lvm[83371]: VG ceph_vg0 finished
Feb 02 11:11:20 compute-0 gifted_jepsen[83289]: {}
Feb 02 11:11:20 compute-0 systemd[1]: libpod-906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42.scope: Deactivated successfully.
Feb 02 11:11:20 compute-0 systemd[1]: libpod-906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42.scope: Consumed 1.050s CPU time.
Feb 02 11:11:20 compute-0 podman[83273]: 2026-02-02 11:11:20.102883294 +0000 UTC m=+0.850972760 container died 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-66b721bc63b8c72731cb0e5c2666ce7bce297dc3f8caf16062152ac2f773a3cc-merged.mount: Deactivated successfully.
Feb 02 11:11:20 compute-0 podman[83273]: 2026-02-02 11:11:20.15261489 +0000 UTC m=+0.900704346 container remove 906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:11:20 compute-0 systemd[1]: libpod-conmon-906ad6f99ed49f5fdab6b05fed3eb8c796dc402e1b44cb57ace1c9a6c9533b42.scope: Deactivated successfully.
Feb 02 11:11:20 compute-0 sudo[83160]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:11:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:11:20 compute-0 ceph-mon[74676]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:20 compute-0 sudo[83392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:11:20 compute-0 sudo[83392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:20 compute-0 sudo[83392]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:20 compute-0 sudo[83417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:20 compute-0 sudo[83417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:20 compute-0 sudo[83417]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:20 compute-0 sudo[83442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:11:20 compute-0 sudo[83442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:20 compute-0 ceph-osd[83123]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 02 11:11:20 compute-0 ceph-osd[83123]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb 02 11:11:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Feb 02 11:11:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:20 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:20 compute-0 podman[83550]: 2026-02-02 11:11:20.982579436 +0000 UTC m=+0.056507091 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:21 compute-0 podman[83550]: 2026-02-02 11:11:21.092298928 +0000 UTC m=+0.166226573 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:21 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:21 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount shared_bdev_used = 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: RocksDB version: 7.9.2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Git sha 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Compile date 2025-07-17 03:12:14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DB SUMMARY
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DB Session ID:  V5BWOFBQVROJXLLCRRAG
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: CURRENT file:  CURRENT
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.error_if_exists: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.create_if_missing: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                     Options.env: 0x55ba9494fdc0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                Options.info_log: 0x55ba949537a0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.statistics: (nil)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.use_fsync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.db_log_dir: 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.write_buffer_manager: 0x55ba94a4aa00
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.unordered_write: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.row_cache: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.wal_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.two_write_queues: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.wal_compression: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.atomic_flush: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_background_jobs: 4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_background_compactions: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_subcompactions: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.max_open_files: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Compression algorithms supported:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZSTD supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kXpressCompression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kBZip2Compression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kLZ4Compression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZlibCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kSnappyCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 sudo[83442]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 939c09c6-9d64-4942-8540-4923a5d8a821
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681354616, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681355092, "job": 1, "event": "recovery_finished"}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: freelist init
Feb 02 11:11:21 compute-0 ceph-osd[83123]: freelist _read_cfg
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs umount
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:21 compute-0 sudo[83825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:11:21 compute-0 sudo[83825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:21 compute-0 sudo[83825]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:21 compute-0 sudo[83850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- inventory --format=json-pretty --filter-for-batch
Feb 02 11:11:21 compute-0 sudo[83850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bdev(0x55ba9497f000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluefs mount shared_bdev_used = 4718592
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: RocksDB version: 7.9.2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Git sha 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Compile date 2025-07-17 03:12:14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DB SUMMARY
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DB Session ID:  V5BWOFBQVROJXLLCRRAH
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: CURRENT file:  CURRENT
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.error_if_exists: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.create_if_missing: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                     Options.env: 0x55ba94aee2a0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                Options.info_log: 0x55ba94953940
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.statistics: (nil)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.use_fsync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.db_log_dir: 
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.write_buffer_manager: 0x55ba94a4aa00
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.unordered_write: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.row_cache: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                              Options.wal_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.two_write_queues: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.wal_compression: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.atomic_flush: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_background_jobs: 4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_background_compactions: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_subcompactions: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.max_open_files: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Compression algorithms supported:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZSTD supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kXpressCompression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kBZip2Compression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kLZ4Compression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kZlibCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         kSnappyCompression supported: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
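Annotation: the block above closes one column family's dump with the level-compaction sizing this OSD uses throughout the log: L0 compaction triggers at 8 files (writes throttled at 20, stalled at 36), 64 MiB target SSTs (67108864), a 1 GiB level-1 budget (1073741824), and an 8x growth factor per level. A minimal sketch of the same sizing with the stock RocksDB C++ API; the option names are public rocksdb::ColumnFamilyOptions fields, and the Ceph plumbing that actually feeds them in (bluestore_rocksdb_options) is omitted:

```cpp
#include <rocksdb/options.h>

// Sketch: level-compaction sizing matching the values logged above.
rocksdb::ColumnFamilyOptions MakeLevelSizing() {
  rocksdb::ColumnFamilyOptions cf;
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.level0_file_num_compaction_trigger = 8;  // start compacting L0 at 8 files
  cf.level0_slowdown_writes_trigger = 20;     // throttle foreground writes
  cf.level0_stop_writes_trigger = 36;         // hard write stall
  cf.target_file_size_base = 64ULL << 20;     // 67108864: 64 MiB SSTs
  cf.max_bytes_for_level_base = 1ULL << 30;   // 1073741824: 1 GiB at L1
  cf.max_bytes_for_level_multiplier = 8;      // L2 = 8 GiB, L3 = 64 GiB, ...
  cf.num_levels = 7;
  return cf;
}
```

With level_compaction_dynamic_level_bytes at 0, as logged, level capacities are simply base * multiplier^(L-1): 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, and so on up to num_levels.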
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
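Annotation: every column family prints identical table_factory options: 4 KiB blocks, whole-key bloom filtering, index and filter blocks kept in the block cache with the top-level index pinned, and the same cache pointer (0x55ba93b6f350, capacity 483183820, roughly 460 MiB), i.e. one cache shared across all CFs. BinnedLRUCache is Ceph's own cache implementation; with stock RocksDB the closest stand-in is NewLRUCache, and the bloom bits-per-key below is an assumption, since the dump only says "bloomfilter". A hedged sketch:

```cpp
#include <memory>

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch: block-based table settings mirroring the dump. NewLRUCache
// stands in for Ceph's BinnedLRUCache; 10 bits/key is assumed.
rocksdb::ColumnFamilyOptions MakeTableOpts(
    const std::shared_ptr<rocksdb::Cache>& shared_cache) {
  rocksdb::BlockBasedTableOptions t;
  t.block_cache = shared_cache;          // same cache object for all CFs
  t.block_size = 4 * 1024;               // block_size: 4096
  t.cache_index_and_filter_blocks = true;
  t.pin_top_level_index_and_filter = true;
  t.whole_key_filtering = true;
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  t.format_version = 5;

  rocksdb::ColumnFamilyOptions cf;
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return cf;
}

// Usage: one cache, many column families.
//   auto cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
//   auto cf = MakeTableOpts(cache);
```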
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
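Annotation: the memtable settings also repeat across CFs: write_buffer_size 16 MiB, max_write_buffer_number 64, min_write_buffer_number_to_merge 6. That means a flush only starts once six 16 MiB memtables (about 96 MiB) have accumulated, and in the worst case a single CF can hold 64 x 16 MiB = 1 GiB of memtables before writes stall. A sketch of the same knobs, with that arithmetic in comments:

```cpp
#include <rocksdb/options.h>

// Sketch: memtable sizing as logged. Worst case per column family:
//   64 memtables x 16 MiB = 1 GiB before writes stall;
//   each flush merges 6 x 16 MiB, roughly 96 MiB of input.
rocksdb::ColumnFamilyOptions MakeMemtableOpts() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16ULL << 20;   // 16777216
  cf.max_write_buffer_number = 64;
  cf.min_write_buffer_number_to_merge = 6;
  return cf;
}
```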
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
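Annotation: the dump repeats once per column family because BlueStore shards its RocksDB keyspace into CFs (the m-* and p-* families seen here come from Ceph's bluestore_rocksdb_cfs sharding). For orientation only, a standalone RocksDB database's column families can be enumerated and opened like this; this is the generic RocksDB API, not Ceph's internal plumbing, and the path is a placeholder:

```cpp
#include <iostream>
#include <string>
#include <vector>

#include <rocksdb/db.h>

// Sketch: list a RocksDB instance's column families, then open it
// read-only with one handle per CF. "/path/to/db" is a placeholder;
// BlueStore's embedded DB is normally reached via Ceph tooling instead.
int main() {
  rocksdb::DBOptions opts;
  std::vector<std::string> names;
  rocksdb::Status s =
      rocksdb::DB::ListColumnFamilies(opts, "/path/to/db", &names);
  if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

  std::vector<rocksdb::ColumnFamilyDescriptor> descs;
  for (const auto& n : names)
    descs.emplace_back(n, rocksdb::ColumnFamilyOptions());

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  s = rocksdb::DB::OpenForReadOnly(opts, "/path/to/db", descs, &handles, &db);
  if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

  for (const auto& n : names) std::cout << n << "\n";  // e.g. m-1, p-0, ...
  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}
```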
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
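Annotation: two garbage-collection settings recur in every dump: the CompactOnDeletionCollector (sliding window 32768 entries, deletion trigger 16384, ratio 0), which flags an SST for compaction once a window of 32768 consecutive entries contains at least 16384 tombstones, and a ttl of 2592000 s (30 days), after which SSTs become compaction candidates regardless of tombstone density. The collector is part of RocksDB's bundled utilities; a sketch with the matching parameters:

```cpp
#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

// Sketch: tombstone-driven compaction with the logged parameters.
// An SST whose recent window of 32768 entries holds >= 16384 deletions
// is scheduled for compaction; deletion_ratio 0 disables the
// ratio-based trigger, matching "Deletion ratio = 0" in the log.
rocksdb::ColumnFamilyOptions MakeGcOpts() {
  rocksdb::ColumnFamilyOptions cf;
  cf.table_properties_collector_factories.push_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0.0));
  cf.ttl = 2592000;  // 30 days: age-based compaction backstop
  return cf;
}
```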
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
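
The per-column-family dumps above and below differ only in a handful of fields (block cache address and capacity, factory addresses), which is easier to verify mechanically than by eye. A minimal Python sketch, assuming the journal text has been saved to a file; the file name and CLI shape are hypothetical, and pointer values are masked so only real differences remain. Run against the [p-2] and [O-0] dumps here it reports just the block-cache capacity (483183820 vs 536870912):

    #!/usr/bin/env python3
    """Diff two RocksDB column-family option dumps from a ceph-osd journal.
    Sketch only, not a supported tool. Usage: python3 cfdiff.py osd.log p-2 O-0
    """
    import re
    import sys
    from collections import defaultdict

    MARKER = re.compile(r"Options for column family \[([^\]]+)\]")
    KV = re.compile(r"^\s*(Options\.\S+|[a-z_]+)\s*:\s*(\S.*?)\s*$")
    PTR = re.compile(r"0x[0-9a-f]+")

    def parse(path):
        """Return {cf_name: {option: value}} from the journal text."""
        cfs, current = defaultdict(dict), None
        for raw in open(path, encoding="utf-8", errors="replace"):
            # Drop the "Feb 02 ... ceph-osd[83123]: rocksdb:" prefix when present;
            # continuation lines (the table_factory sub-options) have none.
            line = re.sub(r"^.*ceph-osd\[\d+\]: rocksdb:\s?", "", raw.rstrip("\n"))
            m = MARKER.search(line)
            if m:
                current = m.group(1)
                continue
            kv = KV.match(line)
            if current and kv:
                # Mask raw pointers: they differ by construction, not by config.
                cfs[current][kv.group(1)] = PTR.sub("<ptr>", kv.group(2))
        return cfs

    if __name__ == "__main__":
        cfs = parse(sys.argv[1])
        a, b = sys.argv[2], sys.argv[3]
        for key in sorted(set(cfs[a]) | set(cfs[b])):
            va, vb = cfs[a].get(key), cfs[b].get(key)
            if va != vb:
                print(f"{key}: {a}={va}  {b}={vb}")
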
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:           Options.merge_operator: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba94953ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba93b6e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.compression: LZ4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.num_levels: 7
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.bloom_locality: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                               Options.ttl: 2592000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                       Options.enable_blob_files: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                           Options.min_blob_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
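
The recovery lines above enumerate the twelve column families in this OSD's RocksDB; the naming pattern (default plus m-*, p-*, O-* shards and L, P) looks like BlueStore's sharded keyspace, though that reading is an inference from the log, not something it states. A short sketch that turns those lines into a name-to-ID map, assuming the journal text is piped on stdin:

    import re
    import sys

    CF = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

    # Build the column-family inventory recovered above, e.g.
    # {'default': 0, 'm-0': 1, ..., 'L': 10, 'P': 11}.
    families = {m.group(1): int(m.group(2))
                for m in map(CF.search, sys.stdin) if m}
    print(families)
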
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 939c09c6-9d64-4942-8540-4923a5d8a821
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681617790, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681628042, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030681, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "939c09c6-9d64-4942-8540-4923a5d8a821", "db_session_id": "V5BWOFBQVROJXLLCRRAH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681631386, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030681, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "939c09c6-9d64-4942-8540-4923a5d8a821", "db_session_id": "V5BWOFBQVROJXLLCRRAH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681634790, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030681, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "939c09c6-9d64-4942-8540-4923a5d8a821", "db_session_id": "V5BWOFBQVROJXLLCRRAH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030681636548, "job": 1, "event": "recovery_finished"}
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ba94b1a000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: DB pointer 0x55ba94afa000
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 02 11:11:21 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:11:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 11:11:21 compute-0 ceph-osd[83123]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 02 11:11:21 compute-0 ceph-osd[83123]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 02 11:11:21 compute-0 ceph-osd[83123]: _get_class not permitted to load lua
Feb 02 11:11:21 compute-0 ceph-osd[83123]: _get_class not permitted to load sdk
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 load_pgs
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 load_pgs opened 0 pgs
Feb 02 11:11:21 compute-0 ceph-osd[83123]: osd.1 0 log_to_monitors true
Feb 02 11:11:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1[83119]: 2026-02-02T11:11:21.666+0000 7f1d7fbdd740 -1 osd.1 0 log_to_monitors true
Feb 02 11:11:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb 02 11:11:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.797803038 +0000 UTC m=+0.031729446 container create e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:11:21 compute-0 systemd[1]: Started libpod-conmon-e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a.scope.
Feb 02 11:11:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.86065452 +0000 UTC m=+0.094580948 container init e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.866396952 +0000 UTC m=+0.100323350 container start e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:11:21 compute-0 priceless_ishizaka[84143]: 167 167
Feb 02 11:11:21 compute-0 systemd[1]: libpod-e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a.scope: Deactivated successfully.
Feb 02 11:11:21 compute-0 conmon[84143]: conmon e75ea243b34b6e5a4e67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a.scope/container/memory.events
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.870354141 +0000 UTC m=+0.104280549 container attach e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.871809045 +0000 UTC m=+0.105735463 container died e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.784374504 +0000 UTC m=+0.018300942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f54447e58e4b84268acf528055ca77045d170b3337566cd80de3656683f628e-merged.mount: Deactivated successfully.
Feb 02 11:11:21 compute-0 podman[84129]: 2026-02-02 11:11:21.905664594 +0000 UTC m=+0.139591002 container remove e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ishizaka, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:21 compute-0 systemd[1]: libpod-conmon-e75ea243b34b6e5a4e67bbefdaa99ee6c9f539acb504ccac9fcbb227d1b2208a.scope: Deactivated successfully.
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.013791698 +0000 UTC m=+0.039114928 container create e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:11:22 compute-0 systemd[1]: Started libpod-conmon-e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2.scope.
Feb 02 11:11:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c2c472899aa9bde6727fcff2c3c19023a03ae3ee714f4bb0c0354276dc8c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c2c472899aa9bde6727fcff2c3c19023a03ae3ee714f4bb0c0354276dc8c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c2c472899aa9bde6727fcff2c3c19023a03ae3ee714f4bb0c0354276dc8c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c2c472899aa9bde6727fcff2c3c19023a03ae3ee714f4bb0c0354276dc8c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.081441824 +0000 UTC m=+0.106765094 container init e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.086883817 +0000 UTC m=+0.112207047 container start e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.090800105 +0000 UTC m=+0.116123355 container attach e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:21.995775086 +0000 UTC m=+0.021098346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
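The podman lines above trace one short-lived helper container through its standard lifecycle: image pull, create, init, start, attach, and, once its one-shot job finishes further down, died and remove — the footprint of a `podman run --rm` invocation. The same sequence can be watched live from podman's event stream; a minimal sketch, using the container name from the log:

    $ podman events --since 5m --filter container=frosty_moser

Each event carries the same timestamp, container ID, and image fields that appear in these journal entries.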
Feb 02 11:11:22 compute-0 ceph-mon[74676]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 02 11:11:22 compute-0 ceph-mon[74676]: osdmap e6: 2 total, 0 up, 2 in
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 02 11:11:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 02 11:11:22 compute-0 sudo[85084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcufkcrtnbmpaoyqvtjkxtlsbeuaypzx ; /usr/bin/python3'
Feb 02 11:11:22 compute-0 sudo[85084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:11:22 compute-0 frosty_moser[84182]: [
Feb 02 11:11:22 compute-0 frosty_moser[84182]:     {
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "available": false,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "being_replaced": false,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "ceph_device_lvm": false,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "lsm_data": {},
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "lvs": [],
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "path": "/dev/sr0",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "rejected_reasons": [
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "Insufficient space (<5GB)",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "Has a FileSystem"
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         ],
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         "sys_api": {
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "actuators": null,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "device_nodes": [
Feb 02 11:11:22 compute-0 frosty_moser[84182]:                 "sr0"
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             ],
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "devname": "sr0",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "human_readable_size": "482.00 KB",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "id_bus": "ata",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "model": "QEMU DVD-ROM",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "nr_requests": "2",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "parent": "/dev/sr0",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "partitions": {},
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "path": "/dev/sr0",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "removable": "1",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "rev": "2.5+",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "ro": "0",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "rotational": "1",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "sas_address": "",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "sas_device_handle": "",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "scheduler_mode": "mq-deadline",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "sectors": 0,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "sectorsize": "2048",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "size": 493568.0,
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "support_discard": "2048",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "type": "disk",
Feb 02 11:11:22 compute-0 frosty_moser[84182]:             "vendor": "QEMU"
Feb 02 11:11:22 compute-0 frosty_moser[84182]:         }
Feb 02 11:11:22 compute-0 frosty_moser[84182]:     }
Feb 02 11:11:22 compute-0 frosty_moser[84182]: ]
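The JSON block above is a device inventory report from the frosty_moser helper container: the only device found on this VM is the QEMU virtual DVD drive, rejected as an OSD candidate for having under 5 GB of capacity (482 KB) and an existing filesystem. The format matches ceph-volume's inventory scan, which cephadm re-runs periodically on each host; a sketch of reproducing it by hand, assuming a cephadm-managed host:

    $ cephadm ceph-volume -- inventory --format json-pretty
    $ ceph orch device ls compute-0 --format json-pretty   # same data, via the orchestrator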
Feb 02 11:11:22 compute-0 systemd[1]: libpod-e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2.scope: Deactivated successfully.
Feb 02 11:11:22 compute-0 conmon[84182]: conmon e02d59f70ef7a2840f8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2.scope/container/memory.events
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.775801769 +0000 UTC m=+0.801124999 container died e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c2c472899aa9bde6727fcff2c3c19023a03ae3ee714f4bb0c0354276dc8c72-merged.mount: Deactivated successfully.
Feb 02 11:11:22 compute-0 python3[85149]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
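The Ansible task above wraps a one-shot ceph CLI container just to count up OSDs; stripped of the podman scaffolding, the check is the pipeline below, and at this instant it returns 0 (the status JSON a few lines down still reports num_up_osds:0), so the play is evidently polling until the OSDs come up. A sketch of the bare check and a retry loop around it (FSID and key paths taken from the logged command):

    $ ceph --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
          -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
          status --format json | jq .osdmap.num_up_osds
    $ until [ "$(ceph status --format json | jq .osdmap.num_up_osds)" -ge 2 ]; do sleep 10; done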
Feb 02 11:11:22 compute-0 podman[84166]: 2026-02-02 11:11:22.895524602 +0000 UTC m=+0.920847832 container remove e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:22 compute-0 systemd[1]: libpod-conmon-e02d59f70ef7a2840f8d6c5764c105647fd888bb95c5dd36a1afb9c355096da2.scope: Deactivated successfully.
Feb 02 11:11:22 compute-0 podman[85302]: 2026-02-02 11:11:22.927442273 +0000 UTC m=+0.053672547 container create fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:11:22 compute-0 sudo[83850]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:11:22 compute-0 systemd[1]: Started libpod-conmon-fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6.scope.
Feb 02 11:11:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:11:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3acb8991001fd1769d84388cf8fabbd498511af92d353b2935298c6ac20b00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3acb8991001fd1769d84388cf8fabbd498511af92d353b2935298c6ac20b00/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3acb8991001fd1769d84388cf8fabbd498511af92d353b2935298c6ac20b00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:22.907271395 +0000 UTC m=+0.033501679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:23.014926385 +0000 UTC m=+0.141156669 container init fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:23.020525454 +0000 UTC m=+0.146755708 container start fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:23.026234576 +0000 UTC m=+0.152464840 container attach fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
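These two warnings are cephadm's memory autotuner at work: it splits a fraction of each host's RAM among the OSDs it hosts, and on compute-0 — which also carries the mon and mgr — the computed share is 134200524 bytes, roughly 128 MiB (the "127.9M" logged above), while osd_memory_target has a hard floor of 939524096 bytes (exactly 896 MiB = 896 * 1024 * 1024), so the set is refused; the 5247M value for compute-1 went through. When the autotuned value is unsuitable, the usual remedies are pinning a per-host value or opting the host out of autotuning; a sketch, using the host names from the log:

    $ ceph config set osd/host:compute-0 osd_memory_target 939524096
    $ ceph orch host label add compute-0 _no_autotune_memory   # cephadm skips hosts with this label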
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 done with init, starting boot process
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 start_boot
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 02 11:11:23 compute-0 ceph-osd[83123]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 02 11:11:23 compute-0 ceph-mon[74676]: osdmap e7: 2 total, 0 up, 2 in
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/102745150; not ready for session (expect reconnect)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:23 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 11:11:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533899718' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:11:23 compute-0 cranky_bose[85318]: 
Feb 02 11:11:23 compute-0 cranky_bose[85318]: {"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":8,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1770030670,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T11:09:58:610843+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T11:11:20.597384+0000","services":{}},"progress_events":{}}
Feb 02 11:11:23 compute-0 systemd[1]: libpod-fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6.scope: Deactivated successfully.
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:23.457963568 +0000 UTC m=+0.584193832 container died fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3acb8991001fd1769d84388cf8fabbd498511af92d353b2935298c6ac20b00-merged.mount: Deactivated successfully.
Feb 02 11:11:23 compute-0 podman[85302]: 2026-02-02 11:11:23.652385728 +0000 UTC m=+0.778615992 container remove fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6 (image=quay.io/ceph/ceph:v19, name=cranky_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:23 compute-0 sudo[85084]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:23 compute-0 systemd[1]: libpod-conmon-fe621c92ba1ac8cb51e47a505ecf3da061f96efebaca7d1fe17c4400039d74b6.scope: Deactivated successfully.
Feb 02 11:11:24 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:24 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:24 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/102745150; not ready for session (expect reconnect)
Feb 02 11:11:24 compute-0 ceph-mon[74676]: purged_snaps scrub starts
Feb 02 11:11:24 compute-0 ceph-mon[74676]: purged_snaps scrub ok
Feb 02 11:11:24 compute-0 ceph-mon[74676]: Adjusting osd_memory_target on compute-1 to  5247M
Feb 02 11:11:24 compute-0 ceph-mon[74676]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:24 compute-0 ceph-mon[74676]: Adjusting osd_memory_target on compute-0 to 127.9M
Feb 02 11:11:24 compute-0 ceph-mon[74676]: Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 11:11:24 compute-0 ceph-mon[74676]: osdmap e8: 2 total, 0 up, 2 in
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3533899718' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:24 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:24 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:25 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:25 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:25 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/102745150; not ready for session (expect reconnect)
Feb 02 11:11:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:25 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:25 compute-0 ceph-mon[74676]: purged_snaps scrub starts
Feb 02 11:11:25 compute-0 ceph-mon[74676]: purged_snaps scrub ok
Feb 02 11:11:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:26 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:26 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:26 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/102745150; not ready for session (expect reconnect)
Feb 02 11:11:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:26 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:26 compute-0 ceph-mon[74676]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:27 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/121046935; not ready for session (expect reconnect)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:27 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 11:11:27 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/102745150; not ready for session (expect reconnect)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:27 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.865 iops: 6621.408 elapsed_sec: 0.453
Feb 02 11:11:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [WRN] : OSD bench result of 6621.407814 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
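The startup bench measured 6621 IOPS for osd.1 (12288000 bytes in 4 KiB ops over 0.453 s, i.e. 3000 ops, matching the 25.865 MiB/s figure), but mclock refuses to adopt any result outside its 50–500 IOPS sanity window for HDD-class devices, so the scheduler keeps the 315 IOPS default. The warning's own advice is to measure with a real benchmark and set the override; a sketch, with a hypothetical idle device /dev/sdX (this fio run is destructive to whatever is on the target, so use a spare device or file):

    $ fio --name=iops-probe --filename=/dev/sdX --direct=1 --ioengine=libaio \
          --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 --runtime=60 --time_based
    $ ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <measured-iops>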
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 0 waiting for initial osdmap
Feb 02 11:11:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1[83119]: 2026-02-02T11:11:27.307+0000 7f1d7c373640 -1 osd.1 0 waiting for initial osdmap
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 check_osdmap_features require_osd_release unknown -> squid
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 set_numa_affinity not setting numa affinity
Feb 02 11:11:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-osd-1[83119]: 2026-02-02T11:11:27.332+0000 7f1d77188640 -1 osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Feb 02 11:11:27 compute-0 ceph-mon[74676]: OSD bench result of 1317.419840 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 11:11:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:27 compute-0 ceph-osd[83123]: osd.1 9 state: booting -> active
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935] boot
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150] boot
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
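With both boot messages in, osdmap e9 finally reads "2 up, 2 in", and the num_up_osds poll from earlier would now return 2. Quick ways to confirm the same state from a shell:

    $ ceph osd stat
    $ ceph osd tree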
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:11:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:27 compute-0 ceph-mgr[74969]: [devicehealth INFO root] creating mgr pool
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb 02 11:11:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
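The devicehealth module keeps its SMART history in a SQLite database stored in RADOS, so its first act on a working cluster is to create the one-PG .mgr pool dispatched here ("creating main.db for devicehealth" appears below once the pool goes active). The mon_command payload maps onto ordinary CLI operations; a rough equivalent sketch, assuming defaults otherwise:

    $ ceph osd pool create .mgr 1
    $ ceph osd pool set .mgr pg_num_max 32
    $ ceph osd pool application enable .mgr mgr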
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 11:11:28 compute-0 ceph-mon[74676]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 11:11:28 compute-0 ceph-mon[74676]: OSD bench result of 6621.407814 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 11:11:28 compute-0 ceph-mon[74676]: osd.0 [v2:192.168.122.101:6800/121046935,v1:192.168.122.101:6801/121046935] boot
Feb 02 11:11:28 compute-0 ceph-mon[74676]: osd.1 [v2:192.168.122.100:6802/102745150,v1:192.168.122.100:6803/102745150] boot
Feb 02 11:11:28 compute-0 ceph-mon[74676]: osdmap e9: 2 total, 2 up, 2 in
Feb 02 11:11:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:11:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:11:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Feb 02 11:11:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 11:11:28 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Feb 02 11:11:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb 02 11:11:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Feb 02 11:11:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb 02 11:11:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
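This health check is transient here: the pool was created at e10 but the matching "osd pool application enable" dispatched above has not yet committed, so POOL_APP_NOT_ENABLED fires and then clears about two seconds later (the "Health check cleared" line further down). When the warning persists for a user-created pool, the fix is the same call the mgr makes; a sketch with a hypothetical pool name:

    $ ceph health detail
    $ ceph osd pool application enable mypool rbd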
Feb 02 11:11:29 compute-0 ceph-osd[83123]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 02 11:11:29 compute-0 ceph-osd[83123]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 02 11:11:29 compute-0 ceph-osd[83123]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 02 11:11:29 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:11:29 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 02 11:11:29 compute-0 ceph-mon[74676]: osdmap e10: 2 total, 2 up, 2 in
Feb 02 11:11:29 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Feb 02 11:11:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 02 11:11:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Feb 02 11:11:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Feb 02 11:11:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:30 compute-0 ceph-mon[74676]: pgmap v38: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:30 compute-0 ceph-mon[74676]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:11:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 02 11:11:30 compute-0 ceph-mon[74676]: osdmap e11: 2 total, 2 up, 2 in
Feb 02 11:11:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb 02 11:11:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Feb 02 11:11:30 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Feb 02 11:11:30 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=10/12 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:11:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:30 compute-0 ceph-mgr[74969]: [devicehealth INFO root] creating main.db for devicehealth
Feb 02 11:11:30 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:11:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 02 11:11:30 compute-0 sudo[85369]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Feb 02 11:11:30 compute-0 sudo[85369]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 02 11:11:30 compute-0 sudo[85369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Feb 02 11:11:30 compute-0 sudo[85369]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
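The sudo pair above is devicehealth's SMART scraper: the ceph user runs smartctl under sudo, and the pam_systemd complaint about the system bus is typically harmless in a container with no access to the host's bus. The exact probe from the log, runnable by hand against the same device:

    $ sudo smartctl -x --json=o /dev/vda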
Feb 02 11:11:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:11:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb 02 11:11:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb 02 11:11:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Feb 02 11:11:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:31 compute-0 ceph-mon[74676]: osdmap e12: 2 total, 2 up, 2 in
Feb 02 11:11:31 compute-0 ceph-mon[74676]: pgmap v41: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:31 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 02 11:11:31 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 02 11:11:31 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:32 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dhyzzj(active, since 74s)
Feb 02 11:11:32 compute-0 ceph-mon[74676]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb 02 11:11:32 compute-0 ceph-mon[74676]: osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:33 compute-0 ceph-mon[74676]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 02 11:11:33 compute-0 ceph-mon[74676]: mgrmap e9: compute-0.dhyzzj(active, since 74s)
Feb 02 11:11:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:35 compute-0 ceph-mon[74676]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:37 compute-0 ceph-mon[74676]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:39 compute-0 ceph-mon[74676]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:41 compute-0 ceph-mon[74676]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:11:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:11:43 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:11:43 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:11:43 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:43 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:44 compute-0 ceph-mon[74676]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 02b0eea7-0491-48ab-8db1-a6756cae3b44 (Updating mon deployment (+2 -> 3))
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Feb 02 11:11:44 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Feb 02 11:11:45 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Feb 02 11:11:45 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 02 11:11:45 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:11:45 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:11:45 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:11:46 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:11:46 compute-0 ceph-mon[74676]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:46 compute-0 ceph-mon[74676]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:46 compute-0 ceph-mon[74676]: Deploying daemon mon.compute-2 on compute-2
Feb 02 11:11:46 compute-0 ceph-mon[74676]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Feb 02 11:11:46 compute-0 ceph-mon[74676]: Cluster is now healthy
Feb 02 11:11:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:47 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Feb 02 11:11:47 compute-0 ceph-mon[74676]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Feb 02 11:11:47 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:11:47 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:11:48 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:48 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb 02 11:11:48 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:48 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:48 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb 02 11:11:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:49 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:49 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:49 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:49 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:49 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:49 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb 02 11:11:50 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:50 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:50 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:50 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb 02 11:11:50 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:50 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:50 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb 02 11:11:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:51 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:51 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:51 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:51 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:51 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:51 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1406303060; not ready for session (expect reconnect)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb 02 11:11:52 compute-0 ceph-mon[74676]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : monmap epoch 2
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:11:47.267787+0000
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : created 2026-02-02T11:09:56.920509+0000
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dhyzzj(active, since 94s)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 02b0eea7-0491-48ab-8db1-a6756cae3b44 (Updating mon deployment (+2 -> 3))
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 02b0eea7-0491-48ab-8db1-a6756cae3b44 (Updating mon deployment (+2 -> 3)) in 7 seconds
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 0b51d036-8cfc-446e-ba47-36f3ffcec5b4 (Updating mgr deployment (+2 -> 3))
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.zebspe on compute-2
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.zebspe on compute-2
Feb 02 11:11:52 compute-0 ceph-mon[74676]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:52 compute-0 ceph-mon[74676]: Deploying daemon mon.compute-1 on compute-1
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0 calling monitor election
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-2 calling monitor election
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: monmap epoch 2
Feb 02 11:11:52 compute-0 ceph-mon[74676]: fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:11:52 compute-0 ceph-mon[74676]: last_changed 2026-02-02T11:11:47.267787+0000
Feb 02 11:11:52 compute-0 ceph-mon[74676]: created 2026-02-02T11:09:56.920509+0000
Feb 02 11:11:52 compute-0 ceph-mon[74676]: min_mon_release 19 (squid)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: election_strategy: 1
Feb 02 11:11:52 compute-0 ceph-mon[74676]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:11:52 compute-0 ceph-mon[74676]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb 02 11:11:52 compute-0 ceph-mon[74676]: fsmap 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mgrmap e9: compute-0.dhyzzj(active, since 94s)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: overall HEALTH_OK
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Feb 02 11:11:52 compute-0 ceph-mon[74676]: paxos.0).electionLogic(10) init, last seen epoch 10
Feb 02 11:11:52 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:53 compute-0 ceph-mgr[74969]: mgr.server handle_report got status from non-daemon mon.compute-2
Feb 02 11:11:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:11:53.267+0000 7fef15581640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Feb 02 11:11:53 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 3 completed events
Feb 02 11:11:53 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:11:53 compute-0 sudo[85395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfynyhexplibhtcbsyupwyhhorgmqkla ; /usr/bin/python3'
Feb 02 11:11:53 compute-0 sudo[85395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:11:53 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:53 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:53 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:53 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:53 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:11:53 compute-0 python3[85397]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:11:53 compute-0 podman[85399]: 2026-02-02 11:11:53.957538675 +0000 UTC m=+0.032153236 container create 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:11:53 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:53 compute-0 systemd[1]: Started libpod-conmon-379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1.scope.
Feb 02 11:11:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d9ca70e0ce830cfae7bc6b8df456eefecb95117ab38085d6f42b4fba928099/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d9ca70e0ce830cfae7bc6b8df456eefecb95117ab38085d6f42b4fba928099/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d9ca70e0ce830cfae7bc6b8df456eefecb95117ab38085d6f42b4fba928099/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:54 compute-0 podman[85399]: 2026-02-02 11:11:53.943267529 +0000 UTC m=+0.017882120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:11:54 compute-0 podman[85399]: 2026-02-02 11:11:54.044298853 +0000 UTC m=+0.118913434 container init 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:11:54 compute-0 podman[85399]: 2026-02-02 11:11:54.052171237 +0000 UTC m=+0.126785798 container start 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:11:54 compute-0 podman[85399]: 2026-02-02 11:11:54.056949843 +0000 UTC m=+0.131564424 container attach 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:11:54 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:54 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:54 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:54 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:54 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:54 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:54 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:55 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:55 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:55 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:55 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:55 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:56 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:56 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:56 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:56 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:56 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb 02 11:11:57 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb 02 11:11:57 compute-0 ceph-mon[74676]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : monmap epoch 3
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:11:52.805819+0000
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : created 2026-02-02T11:09:56.920509+0000
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dhyzzj(active, since 100s)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-2 calling monitor election
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0 calling monitor election
Feb 02 11:11:57 compute-0 ceph-mon[74676]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-1 calling monitor election
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: monmap epoch 3
Feb 02 11:11:57 compute-0 ceph-mon[74676]: fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:11:57 compute-0 ceph-mon[74676]: last_changed 2026-02-02T11:11:52.805819+0000
Feb 02 11:11:57 compute-0 ceph-mon[74676]: created 2026-02-02T11:09:56.920509+0000
Feb 02 11:11:57 compute-0 ceph-mon[74676]: min_mon_release 19 (squid)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: election_strategy: 1
Feb 02 11:11:57 compute-0 ceph-mon[74676]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 11:11:57 compute-0 ceph-mon[74676]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb 02 11:11:57 compute-0 ceph-mon[74676]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Feb 02 11:11:57 compute-0 ceph-mon[74676]: fsmap 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: osdmap e13: 2 total, 2 up, 2 in
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mgrmap e9: compute-0.dhyzzj(active, since 100s)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: overall HEALTH_OK
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.iybsjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.iybsjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.iybsjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:57 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.iybsjv on compute-1
Feb 02 11:11:57 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.iybsjv on compute-1
Feb 02 11:11:58 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/809643405; not ready for session (expect reconnect)
Feb 02 11:11:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:11:58 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.iybsjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.iybsjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:58 compute-0 ceph-mon[74676]: Deploying daemon mgr.compute-1.iybsjv on compute-1
Feb 02 11:11:58 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:11:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 11:11:58 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973980814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:11:58 compute-0 flamboyant_turing[85415]: 
Feb 02 11:11:58 compute-0 flamboyant_turing[85415]: {"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":1,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1770030687,"num_in_osds":2,"osd_in_since":1770030670,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55656448,"bytes_avail":42885627904,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-02-02T11:09:58:610843+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T11:11:20.597384+0000","services":{}},"progress_events":{"0b51d036-8cfc-446e-ba47-36f3ffcec5b4":{"message":"Updating mgr deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
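The single-line JSON above is the output of ceph status --format json, executed through a throwaway quay.io/ceph/ceph:v19 container (journald attributes it to the random container name flamboyant_turing). When scripting against this output, the health status and the in-flight progress events are the fields this job cares about; a sketch using jq, which is an assumption here rather than anything the playbook itself invokes:

    ceph status --format json | jq -r '.health.status'              # HEALTH_OK
    ceph status --format json | jq -r '.progress_events[].message'  # Updating mgr deployment (+2 -> 3) ...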
Feb 02 11:11:58 compute-0 systemd[1]: libpod-379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1.scope: Deactivated successfully.
Feb 02 11:11:58 compute-0 podman[85399]: 2026-02-02 11:11:58.973425735 +0000 UTC m=+5.048040306 container died 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:11:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8d9ca70e0ce830cfae7bc6b8df456eefecb95117ab38085d6f42b4fba928099-merged.mount: Deactivated successfully.
Feb 02 11:11:59 compute-0 podman[85399]: 2026-02-02 11:11:59.015927964 +0000 UTC m=+5.090542525 container remove 379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1 (image=quay.io/ceph/ceph:v19, name=flamboyant_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:11:59 compute-0 systemd[1]: libpod-conmon-379377fac896a59f62b4dea2bb39fa9395fee5d4bcda5389090cd770777b5fb1.scope: Deactivated successfully.
Feb 02 11:11:59 compute-0 sudo[85395]: pam_unix(sudo:session): session closed for user root
Feb 02 11:11:59 compute-0 sudo[85474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxkwlwhdetuxomoehnaefiaopckcwkd ; /usr/bin/python3'
Feb 02 11:11:59 compute-0 sudo[85474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 0b51d036-8cfc-446e-ba47-36f3ffcec5b4 (Updating mgr deployment (+2 -> 3))
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 0b51d036-8cfc-446e-ba47-36f3ffcec5b4 (Updating mgr deployment (+2 -> 3)) in 7 seconds
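The progress module tracked the mgr scale-out as event 0b51d036-… ("Updating mgr deployment (+2 -> 3)") and closed it after 7 seconds; the same event briefly surfaced in the progress_events field of the JSON status above. Active events can also be listed directly, a sketch using the progress module's commands:

    ceph progress        # human-readable list of in-flight events
    ceph progress json   # the same events as JSON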
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev f4bfd484-8f5c-49bf-9242-41209cc1c35b (Updating crash deployment (+1 -> 3))
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Feb 02 11:11:59 compute-0 python3[85476]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
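This is the pattern the Zuul playbook uses for every OpenStack service pool: the ceph CLI runs in a disposable container with the host's /etc/ceph mounted in, so nothing Ceph-specific needs to be installed on the node. Stripped of the Ansible envelope, the task reduces to roughly:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on

One quirk visible in the mon_command below: the bare positional replicated_rule lands in the erasure_code_profile slot rather than being bound as a CRUSH rule. Because the pool type defaults to replicated, the profile is simply ignored and the pool is created anyway; a form closer to the documented signature would spell the pool type out, e.g. osd pool create vms replicated replicated_rule. The same invocation repeats below for the volumes and backups pools.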
Feb 02 11:11:59 compute-0 podman[85477]: 2026-02-02 11:11:59.489681901 +0000 UTC m=+0.048120380 container create 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:11:59 compute-0 systemd[1]: Started libpod-conmon-45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02.scope.
Feb 02 11:11:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25de8541b04df96a967e5e137c5a4366862f5d665a306e4e26cefcd50a43b7e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25de8541b04df96a967e5e137c5a4366862f5d665a306e4e26cefcd50a43b7e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:11:59 compute-0 podman[85477]: 2026-02-02 11:11:59.562834152 +0000 UTC m=+0.121272651 container init 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:11:59 compute-0 podman[85477]: 2026-02-02 11:11:59.472187303 +0000 UTC m=+0.030625802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:11:59 compute-0 podman[85477]: 2026-02-02 11:11:59.569211163 +0000 UTC m=+0.127649632 container start 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:11:59 compute-0 podman[85477]: 2026-02-02 11:11:59.572300691 +0000 UTC m=+0.130739250 container attach 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 11:11:59 compute-0 ceph-mgr[74969]: mgr.server handle_report got status from non-daemon mon.compute-1
Feb 02 11:11:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:11:59.806+0000 7fef15581640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Feb 02 11:11:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:11:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2739480920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/973980814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mon[74676]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 11:11:59 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:11:59 compute-0 ceph-mon[74676]: Deploying daemon crash.compute-2 on compute-2
Feb 02 11:12:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb 02 11:12:00 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2739480920' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Feb 02 11:12:00 compute-0 goofy_colden[85492]: pool 'vms' created
Feb 02 11:12:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Feb 02 11:12:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:00 compute-0 systemd[1]: libpod-45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02.scope: Deactivated successfully.
Feb 02 11:12:00 compute-0 podman[85477]: 2026-02-02 11:12:00.395040405 +0000 UTC m=+0.953478884 container died 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-25de8541b04df96a967e5e137c5a4366862f5d665a306e4e26cefcd50a43b7e4-merged.mount: Deactivated successfully.
Feb 02 11:12:00 compute-0 podman[85477]: 2026-02-02 11:12:00.441847817 +0000 UTC m=+1.000286306 container remove 45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02 (image=quay.io/ceph/ceph:v19, name=goofy_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:12:00 compute-0 systemd[1]: libpod-conmon-45713f3aa9b3435ed94843c39e93bca90d1d1a0515f36dfbbe96dadbf2464f02.scope: Deactivated successfully.
Feb 02 11:12:00 compute-0 sudo[85474]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:00 compute-0 sudo[85555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbddjgllugkdqhfzsljcpjejosfbcujw ; /usr/bin/python3'
Feb 02 11:12:00 compute-0 sudo[85555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:00 compute-0 python3[85557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:00 compute-0 podman[85558]: 2026-02-02 11:12:00.76398418 +0000 UTC m=+0.055914952 container create c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:12:00 compute-0 systemd[1]: Started libpod-conmon-c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a.scope.
Feb 02 11:12:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:00 compute-0 podman[85558]: 2026-02-02 11:12:00.731600359 +0000 UTC m=+0.023531141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58e361c20cf20b6eb0327a20cd215dd4fe596794e457a9e08fa9d91fda7a22b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58e361c20cf20b6eb0327a20cd215dd4fe596794e457a9e08fa9d91fda7a22b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:00 compute-0 podman[85558]: 2026-02-02 11:12:00.849838342 +0000 UTC m=+0.141769164 container init c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:00 compute-0 podman[85558]: 2026-02-02 11:12:00.854292849 +0000 UTC m=+0.146223621 container start c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:12:00 compute-0 podman[85558]: 2026-02-02 11:12:00.85962375 +0000 UTC m=+0.151554552 container attach c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:12:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2739480920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2739480920' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:00 compute-0 ceph-mon[74676]: osdmap e14: 2 total, 2 up, 2 in
Feb 02 11:12:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v59: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:00 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:01 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev f4bfd484-8f5c-49bf-9242-41209cc1c35b (Updating crash deployment (+1 -> 3))
Feb 02 11:12:01 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event f4bfd484-8f5c-49bf-9242-41209cc1c35b (Updating crash deployment (+1 -> 3)) in 2 seconds
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
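Before creating new OSDs, cephadm asks the monitor for any previously destroyed OSD ids so they can be recycled rather than allocating fresh ones; the dispatch above is the JSON form of (sketch):

    ceph osd tree destroyed --format json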
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:01 compute-0 sudo[85596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:01 compute-0 sudo[85596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:01 compute-0 sudo[85596]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:01 compute-0 sudo[85621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:12:01 compute-0 sudo[85621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
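This is cephadm's OSD-creation path on compute-0: the content-addressed copy of the cephadm binary (the .1a8853… suffix) launches ceph-volume inside the pinned ceph image, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the resulting OSD to the default_drive_group service spec. Inside the container the long command line boils down to (sketch):

    ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd

The data device is a pre-created logical volume rather than a raw disk, and --no-systemd is passed because cephadm generates and manages the daemon's systemd units itself.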
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3131283507' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3131283507' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Feb 02 11:12:01 compute-0 awesome_faraday[85573]: pool 'volumes' created
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Feb 02 11:12:01 compute-0 systemd[1]: libpod-c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a.scope: Deactivated successfully.
Feb 02 11:12:01 compute-0 podman[85558]: 2026-02-02 11:12:01.413190588 +0000 UTC m=+0.705121360 container died c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:01 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b58e361c20cf20b6eb0327a20cd215dd4fe596794e457a9e08fa9d91fda7a22b-merged.mount: Deactivated successfully.
Feb 02 11:12:01 compute-0 podman[85558]: 2026-02-02 11:12:01.48355816 +0000 UTC m=+0.775488932 container remove c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a (image=quay.io/ceph/ceph:v19, name=awesome_faraday, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:12:01 compute-0 systemd[1]: libpod-conmon-c44f52c1fb92eb5729cb7949bf36e8196d435317351f49cbeb0663082441411a.scope: Deactivated successfully.
Feb 02 11:12:01 compute-0 sudo[85555]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.546110029 +0000 UTC m=+0.041470931 container create 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:01 compute-0 systemd[1]: Started libpod-conmon-564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b.scope.
Feb 02 11:12:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:01 compute-0 sudo[85739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritjzuvxkxomzcissuuznvvbjexubuck ; /usr/bin/python3'
Feb 02 11:12:01 compute-0 sudo[85739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.613695842 +0000 UTC m=+0.109056764 container init 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.618089657 +0000 UTC m=+0.113450559 container start 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:12:01 compute-0 pedantic_kepler[85740]: 167 167
Feb 02 11:12:01 compute-0 systemd[1]: libpod-564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b.scope: Deactivated successfully.
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.527571322 +0000 UTC m=+0.022932244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.622893473 +0000 UTC m=+0.118254455 container attach 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.623086419 +0000 UTC m=+0.118447321 container died 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d36b3084d6c4a0fa4e217c9e930968c2b3d36e498befe6b2c8dd97e92d92b00c-merged.mount: Deactivated successfully.
Feb 02 11:12:01 compute-0 podman[85700]: 2026-02-02 11:12:01.665817884 +0000 UTC m=+0.161178786 container remove 564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:01 compute-0 systemd[1]: libpod-conmon-564e622c424a4c7f93d7e9583cf3d5439098c6dd31af759f76e450039eafcc0b.scope: Deactivated successfully.
Feb 02 11:12:01 compute-0 python3[85744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:01 compute-0 podman[85767]: 2026-02-02 11:12:01.803410088 +0000 UTC m=+0.064599378 container create e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:12:01 compute-0 podman[85769]: 2026-02-02 11:12:01.8407216 +0000 UTC m=+0.099593234 container create 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:12:01 compute-0 systemd[1]: Started libpod-conmon-e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9.scope.
Feb 02 11:12:01 compute-0 podman[85767]: 2026-02-02 11:12:01.763579995 +0000 UTC m=+0.024769285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:01 compute-0 podman[85769]: 2026-02-02 11:12:01.764560973 +0000 UTC m=+0.023432627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:01 compute-0 systemd[1]: Started libpod-conmon-5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235.scope.
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975c93f952ec0516be6501b634a8b26a601782d4cfc1a371ae3b093279c6e904/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975c93f952ec0516be6501b634a8b26a601782d4cfc1a371ae3b093279c6e904/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:01 compute-0 podman[85767]: 2026-02-02 11:12:01.912698267 +0000 UTC m=+0.173887557 container init e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:01 compute-0 podman[85767]: 2026-02-02 11:12:01.918099101 +0000 UTC m=+0.179288391 container start e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:01 compute-0 podman[85769]: 2026-02-02 11:12:01.923961407 +0000 UTC m=+0.182833061 container init 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:12:01 compute-0 podman[85769]: 2026-02-02 11:12:01.928465976 +0000 UTC m=+0.187337610 container start 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:01 compute-0 podman[85767]: 2026-02-02 11:12:01.933338564 +0000 UTC m=+0.194527854 container attach e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:12:01 compute-0 podman[85769]: 2026-02-02 11:12:01.941019523 +0000 UTC m=+0.199891177 container attach 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:12:01 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
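The freshly created vms pool trips POOL_APP_NOT_ENABLED because no application tag has been assigned yet. For RBD-backed OpenStack pools the usual remedy is one command per pool once they all exist (a sketch; rbd is the tag Nova/Cinder-style pools are expected to carry):

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd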
Feb 02 11:12:02 compute-0 ceph-mon[74676]: pgmap v59: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3131283507' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3131283507' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:02 compute-0 ceph-mon[74676]: osdmap e15: 2 total, 2 up, 2 in
Feb 02 11:12:02 compute-0 ceph-mon[74676]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:02 compute-0 compassionate_haibt[85794]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:12:02 compute-0 compassionate_haibt[85794]: --> All data devices are unavailable
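Here the batch run on compute-0 sees one LVM data device and zero physical ones and rejects it as unavailable: the LV was already prepared as an OSD earlier in the job, so there is nothing left to create. cephadm therefore falls back to inventorying existing OSDs, which is the lvm list call dispatched a few lines below; run by hand it would be (sketch):

    ceph-volume lvm list --format json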
Feb 02 11:12:02 compute-0 systemd[1]: libpod-e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9.scope: Deactivated successfully.
Feb 02 11:12:02 compute-0 podman[85767]: 2026-02-02 11:12:02.253567814 +0000 UTC m=+0.514757114 container died e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1274120742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:02 compute-0 podman[85767]: 2026-02-02 11:12:02.318501371 +0000 UTC m=+0.579690661 container remove e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:12:02 compute-0 systemd[1]: libpod-conmon-e4919f4e13ab1ec730cfad2676b8b903e356847014770f5439231bfc87c113b9.scope: Deactivated successfully.
Feb 02 11:12:02 compute-0 sudo[85621]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1274120742' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Feb 02 11:12:02 compute-0 modest_joliot[85802]: pool 'backups' created
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Feb 02 11:12:02 compute-0 systemd[1]: libpod-5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235.scope: Deactivated successfully.
Feb 02 11:12:02 compute-0 podman[85769]: 2026-02-02 11:12:02.432168954 +0000 UTC m=+0.691040598 container died 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b7f4829b062eda34f93a9bf8d5578b7b31a5a5eed9e1da9794286eb70bade59-merged.mount: Deactivated successfully.
Feb 02 11:12:02 compute-0 sudo[85852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:02 compute-0 sudo[85852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:02 compute-0 sudo[85852]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-975c93f952ec0516be6501b634a8b26a601782d4cfc1a371ae3b093279c6e904-merged.mount: Deactivated successfully.
Feb 02 11:12:02 compute-0 sudo[85889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:12:02 compute-0 sudo[85889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:02 compute-0 podman[85769]: 2026-02-02 11:12:02.523354488 +0000 UTC m=+0.782226122 container remove 5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235 (image=quay.io/ceph/ceph:v19, name=modest_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:12:02 compute-0 systemd[1]: libpod-conmon-5d31ad6abe9eeb34b7e1878d5b19ec5dddfaae8bfbe74873091476d8c59b7235.scope: Deactivated successfully.
Feb 02 11:12:02 compute-0 sudo[85739]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"} v 0)
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"}]': finished
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:02 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
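osdmap e17 ("3 total, 2 up, 3 in") reflects the "osd new" call just above: osd.2 now exists in the map and counts as "in" (it participates in placement) but is not yet "up" because its daemon has not started. The mgr's periodic "osd metadata" probe therefore fails with ENOENT, and the same probe repeats below until the OSD boots. From any admin shell with a keyring, the state at this moment would look roughly like:

    # osd.2 is allocated but not yet running; metadata appears once it boots.
    ceph osd metadata 2    # fails with ENOENT while osd.2 is still down
    ceph osd stat          # reports 3 osds: 2 up, 3 in at this point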
Feb 02 11:12:02 compute-0 sudo[85937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgslwmjynjexhddreseqktduhegqkire ; /usr/bin/python3'
Feb 02 11:12:02 compute-0 sudo[85937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:02 compute-0 python3[85939]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
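For readability, this is the podman invocation packed into the ansible line above, reflowed with content taken verbatim from the log:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool create images replicated_rule --autoscale-mode on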
Feb 02 11:12:02 compute-0 podman[85981]: 2026-02-02 11:12:02.867707924 +0000 UTC m=+0.047284136 container create f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:02 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 5 completed events
Feb 02 11:12:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:12:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.904302175 +0000 UTC m=+0.076467116 container create ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:02 compute-0 systemd[1]: Started libpod-conmon-f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975.scope.
Feb 02 11:12:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:02 compute-0 systemd[1]: Started libpod-conmon-ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4.scope.
Feb 02 11:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66f50b011ae837c5f005bac259d9eacde7340df57eee418d74cc61a45dc4ec6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66f50b011ae837c5f005bac259d9eacde7340df57eee418d74cc61a45dc4ec6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:02 compute-0 podman[85981]: 2026-02-02 11:12:02.839277055 +0000 UTC m=+0.018853307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:02 compute-0 podman[85981]: 2026-02-02 11:12:02.944714975 +0000 UTC m=+0.124291217 container init f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.854511289 +0000 UTC m=+0.026676240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:02 compute-0 podman[85981]: 2026-02-02 11:12:02.949265664 +0000 UTC m=+0.128841896 container start f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:12:02 compute-0 podman[85981]: 2026-02-02 11:12:02.955871852 +0000 UTC m=+0.135448074 container attach f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.96248896 +0000 UTC m=+0.134653901 container init ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.966636998 +0000 UTC m=+0.138801919 container start ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:12:02 compute-0 sweet_taussig[86017]: 167 167
Feb 02 11:12:02 compute-0 systemd[1]: libpod-ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4.scope: Deactivated successfully.
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.984573659 +0000 UTC m=+0.156738610 container attach ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:02 compute-0 podman[85982]: 2026-02-02 11:12:02.984989601 +0000 UTC m=+0.157154522 container died ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v63: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
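pgmap v63 ("4 pgs: 3 unknown, 1 active+clean") is the transient view right after pool creation: the three "unknown" PGs belong to the freshly created pools and have not peered yet, partly because only 2 of the 3 mapped OSDs are up. They clear shortly afterwards (pgmap v65 below shows 4 active+clean). On a live cluster the same summary comes from:

    # PG summary as the new pools peer; 'unknown' clears once OSDs report in.
    ceph pg stat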
Feb 02 11:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7346abe5b488f621b26f8f6f061ff8dd8de27c4f69b6a72dcaa295fdc036882a-merged.mount: Deactivated successfully.
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1274120742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1274120742' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:03 compute-0 ceph-mon[74676]: osdmap e16: 2 total, 2 up, 2 in
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1200564200' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"}]: dispatch
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"}]: dispatch
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90675962-4b5b-4f4f-921d-7595b967b230"}]': finished
Feb 02 11:12:03 compute-0 ceph-mon[74676]: osdmap e17: 3 total, 2 up, 3 in
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:03 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:03 compute-0 podman[85982]: 2026-02-02 11:12:03.056643159 +0000 UTC m=+0.228808110 container remove ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:03 compute-0 systemd[1]: libpod-conmon-ac18c88680185e9778e252fe32ac013a1d483a78db71e08c3d409ed78f3a14d4.scope: Deactivated successfully.
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.20958991 +0000 UTC m=+0.053390070 container create a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:12:03 compute-0 systemd[1]: Started libpod-conmon-a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b.scope.
Feb 02 11:12:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7596497a6800df964027e1e5c9974d0871ce192bdd16c33593523141377120d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7596497a6800df964027e1e5c9974d0871ce192bdd16c33593523141377120d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7596497a6800df964027e1e5c9974d0871ce192bdd16c33593523141377120d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7596497a6800df964027e1e5c9974d0871ce192bdd16c33593523141377120d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.184763634 +0000 UTC m=+0.028563824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.290703907 +0000 UTC m=+0.134504097 container init a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.296816581 +0000 UTC m=+0.140616791 container start a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.305063676 +0000 UTC m=+0.148863866 container attach a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb 02 11:12:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:12:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2006605447' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]: {
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:     "1": [
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:         {
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "devices": [
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "/dev/loop3"
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             ],
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "lv_name": "ceph_lv0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "lv_size": "21470642176",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "name": "ceph_lv0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "tags": {
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.cluster_name": "ceph",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.crush_device_class": "",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.encrypted": "0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.osd_id": "1",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.type": "block",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.vdo": "0",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:                 "ceph.with_tpm": "0"
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             },
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "type": "block",
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:             "vg_name": "ceph_vg0"
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:         }
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]:     ]
Feb 02 11:12:03 compute-0 awesome_roentgen[86080]: }
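The JSON block above is the result of the cephadm ceph-volume call opened at sudo[85889]: "lvm list" finds one existing OSD on this host, osd.1, backed by LV ceph_vg0/ceph_lv0 on /dev/loop3 (lv_size 21470642176 bytes, just under 20 GiB), unencrypted, with osdspec_affinity "default_drive_group". Run directly on the host, the equivalent query would be roughly:

    # Same query as the logged cephadm call (image/timeout flags omitted):
    cephadm ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
      -- lvm list --format json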
Feb 02 11:12:03 compute-0 systemd[1]: libpod-a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b.scope: Deactivated successfully.
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.587681875 +0000 UTC m=+0.431482045 container died a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7596497a6800df964027e1e5c9974d0871ce192bdd16c33593523141377120d-merged.mount: Deactivated successfully.
Feb 02 11:12:03 compute-0 podman[86063]: 2026-02-02 11:12:03.629943758 +0000 UTC m=+0.473743928 container remove a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:03 compute-0 systemd[1]: libpod-conmon-a36948c14eac7525c4abc9c89bd65531112d0716a5972d906347bec303fe2d4b.scope: Deactivated successfully.
Feb 02 11:12:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb 02 11:12:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2006605447' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Feb 02 11:12:03 compute-0 hopeful_lovelace[86011]: pool 'images' created
Feb 02 11:12:03 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Feb 02 11:12:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:03 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:03 compute-0 systemd[1]: libpod-f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975.scope: Deactivated successfully.
Feb 02 11:12:03 compute-0 podman[85981]: 2026-02-02 11:12:03.689521602 +0000 UTC m=+0.869097824 container died f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:12:03 compute-0 sudo[85889]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66f50b011ae837c5f005bac259d9eacde7340df57eee418d74cc61a45dc4ec6-merged.mount: Deactivated successfully.
Feb 02 11:12:03 compute-0 podman[85981]: 2026-02-02 11:12:03.746902005 +0000 UTC m=+0.926478227 container remove f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975 (image=quay.io/ceph/ceph:v19, name=hopeful_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:12:03 compute-0 systemd[1]: libpod-conmon-f08565254f1e85d9cea91af729c09a0f87958ff9686fc08638b24b72880d9975.scope: Deactivated successfully.
Feb 02 11:12:03 compute-0 sudo[86110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:03 compute-0 sudo[86110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:03 compute-0 sudo[86110]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:03 compute-0 sudo[85937]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:03 compute-0 sudo[86140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:12:03 compute-0 sudo[86140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:03 compute-0 sudo[86188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxubvnwwnjwesknucaeldqtrejrobdot ; /usr/bin/python3'
Feb 02 11:12:03 compute-0 sudo[86188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:04 compute-0 python3[86190]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:04 compute-0 ceph-mon[74676]: pgmap v63: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1643319813' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb 02 11:12:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2006605447' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2006605447' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:04 compute-0 ceph-mon[74676]: osdmap e18: 3 total, 2 up, 3 in
Feb 02 11:12:04 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:04 compute-0 podman[86203]: 2026-02-02 11:12:04.061427592 +0000 UTC m=+0.037080155 container create aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:12:04 compute-0 systemd[1]: Started libpod-conmon-aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275.scope.
Feb 02 11:12:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873b5196e1755f07b23fdfbe2f474a49f52de21bb5ef0651f33951be0b885259/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873b5196e1755f07b23fdfbe2f474a49f52de21bb5ef0651f33951be0b885259/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 podman[86203]: 2026-02-02 11:12:04.123522109 +0000 UTC m=+0.099174692 container init aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:12:04 compute-0 podman[86203]: 2026-02-02 11:12:04.128906152 +0000 UTC m=+0.104558715 container start aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:12:04 compute-0 podman[86203]: 2026-02-02 11:12:04.133668397 +0000 UTC m=+0.109320970 container attach aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:12:04 compute-0 podman[86203]: 2026-02-02 11:12:04.04588236 +0000 UTC m=+0.021534923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.198176432 +0000 UTC m=+0.040647647 container create 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:04 compute-0 systemd[1]: Started libpod-conmon-22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c.scope.
Feb 02 11:12:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.263070088 +0000 UTC m=+0.105541303 container init 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.268026979 +0000 UTC m=+0.110498174 container start 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.180610753 +0000 UTC m=+0.023081968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:04 compute-0 jovial_vaughan[86268]: 167 167
Feb 02 11:12:04 compute-0 systemd[1]: libpod-22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c.scope: Deactivated successfully.
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.293449362 +0000 UTC m=+0.135920587 container attach 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.294349607 +0000 UTC m=+0.136820802 container died 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:12:04 compute-0 podman[86248]: 2026-02-02 11:12:04.333211693 +0000 UTC m=+0.175682888 container remove 22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:04 compute-0 systemd[1]: libpod-conmon-22f7a2843cfc6d38bf9bb4c61c84667fbb2adb71d29e34b64b6c4a98575dda3c.scope: Deactivated successfully.
Feb 02 11:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-479204b0f2ef338b0c2e5852952745d3ecee80fb98597c6910d81bd358db98b0-merged.mount: Deactivated successfully.
Feb 02 11:12:04 compute-0 podman[86306]: 2026-02-02 11:12:04.455489621 +0000 UTC m=+0.044594269 container create 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:12:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:12:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/544312461' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:04 compute-0 systemd[1]: Started libpod-conmon-36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e.scope.
Feb 02 11:12:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3385b3395224bacf7fc24d73269be6837c6d6f7ae9184fa9a8943b90688f78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3385b3395224bacf7fc24d73269be6837c6d6f7ae9184fa9a8943b90688f78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3385b3395224bacf7fc24d73269be6837c6d6f7ae9184fa9a8943b90688f78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3385b3395224bacf7fc24d73269be6837c6d6f7ae9184fa9a8943b90688f78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:04 compute-0 podman[86306]: 2026-02-02 11:12:04.432888868 +0000 UTC m=+0.021993576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:04 compute-0 podman[86306]: 2026-02-02 11:12:04.560712895 +0000 UTC m=+0.149817573 container init 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:12:04 compute-0 podman[86306]: 2026-02-02 11:12:04.567505318 +0000 UTC m=+0.156609956 container start 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:12:04 compute-0 podman[86306]: 2026-02-02 11:12:04.578339196 +0000 UTC m=+0.167443864 container attach 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:04 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe started
Feb 02 11:12:04 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mgr.compute-2.zebspe 192.168.122.102:0/3682169059; not ready for session (expect reconnect)
Feb 02 11:12:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v65: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb 02 11:12:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/544312461' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:05 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe started
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/544312461' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Feb 02 11:12:05 compute-0 goofy_shtern[86242]: pool 'cephfs.cephfs.meta' created
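The pool sequence so far (volumes, backups, images, and now cephfs.cephfs.meta) matches a typical OpenStack-on-Ceph layout: RBD pools for Cinder and Glance plus CephFS pools. With cephfs.cephfs.meta in place, a matching data pool and the filesystem itself would normally follow; neither step appears in this excerpt, so as a sketch only:

    # Hypothetical continuation (not shown in this capture):
    ceph osd pool create cephfs.cephfs.data replicated_rule --autoscale-mode on
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data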
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.dhyzzj(active, since 107s), standbys: compute-2.zebspe
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"} v 0)
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:12:05 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:05 compute-0 systemd[1]: libpod-aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275.scope: Deactivated successfully.
Feb 02 11:12:05 compute-0 podman[86203]: 2026-02-02 11:12:05.115439055 +0000 UTC m=+1.091091618 container died aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-873b5196e1755f07b23fdfbe2f474a49f52de21bb5ef0651f33951be0b885259-merged.mount: Deactivated successfully.
Feb 02 11:12:05 compute-0 lvm[86408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:12:05 compute-0 lvm[86408]: VG ceph_vg0 finished
Feb 02 11:12:05 compute-0 podman[86203]: 2026-02-02 11:12:05.179658772 +0000 UTC m=+1.155311335 container remove aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275 (image=quay.io/ceph/ceph:v19, name=goofy_shtern, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:12:05 compute-0 systemd[1]: libpod-conmon-aee38123ffcbb28051101b72cb54ab30754c2c96f2004502da39cb2869f45275.scope: Deactivated successfully.
Feb 02 11:12:05 compute-0 sudo[86188]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:05 compute-0 exciting_napier[86326]: {}
Feb 02 11:12:05 compute-0 systemd[1]: libpod-36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e.scope: Deactivated successfully.
Feb 02 11:12:05 compute-0 podman[86306]: 2026-02-02 11:12:05.261945003 +0000 UTC m=+0.851049651 container died 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:05 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from mgr.compute-1.iybsjv 192.168.122.101:0/950782837; not ready for session (expect reconnect)
Feb 02 11:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee3385b3395224bacf7fc24d73269be6837c6d6f7ae9184fa9a8943b90688f78-merged.mount: Deactivated successfully.
Feb 02 11:12:05 compute-0 podman[86306]: 2026-02-02 11:12:05.317433411 +0000 UTC m=+0.906538059 container remove 36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:05 compute-0 sudo[86454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynnscljqngnebybbxkbzonerwfvjyxei ; /usr/bin/python3'
Feb 02 11:12:05 compute-0 systemd[1]: libpod-conmon-36c4b66c8335b88c2f53fd2ed9aeaecaa97ca772f3ca6cffa2e07758fa56ee9e.scope: Deactivated successfully.
Feb 02 11:12:05 compute-0 sudo[86454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:05 compute-0 sudo[86140]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:05 compute-0 python3[86456]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:05 compute-0 podman[86457]: 2026-02-02 11:12:05.560850526 +0000 UTC m=+0.043044826 container create f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:05 compute-0 podman[86457]: 2026-02-02 11:12:05.54203103 +0000 UTC m=+0.024225360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:05 compute-0 systemd[1]: Started libpod-conmon-f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3.scope.
Feb 02 11:12:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bdce4dfcdac665d25783d610f9b6a48018b719c1a9552231fc761c4ad1fe1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bdce4dfcdac665d25783d610f9b6a48018b719c1a9552231fc761c4ad1fe1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:05 compute-0 podman[86457]: 2026-02-02 11:12:05.691577485 +0000 UTC m=+0.173771805 container init f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:12:05 compute-0 podman[86457]: 2026-02-02 11:12:05.701427315 +0000 UTC m=+0.183621615 container start f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:12:05 compute-0 podman[86457]: 2026-02-02 11:12:05.704706298 +0000 UTC m=+0.186900618 container attach f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462396198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462396198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Feb 02 11:12:06 compute-0 romantic_dirac[86473]: pool 'cephfs.cephfs.data' created
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Feb 02 11:12:06 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:06 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:06 compute-0 systemd[1]: libpod-f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3.scope: Deactivated successfully.
Feb 02 11:12:06 compute-0 podman[86457]: 2026-02-02 11:12:06.126455816 +0000 UTC m=+0.608650116 container died f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:06 compute-0 ceph-mon[74676]: pgmap v65: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/544312461' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:06 compute-0 ceph-mon[74676]: osdmap e19: 3 total, 2 up, 3 in
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mgrmap e10: compute-0.dhyzzj(active, since 107s), standbys: compute-2.zebspe
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:12:06 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/462396198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.dhyzzj(active, since 108s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"} v 0)
Feb 02 11:12:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b35bdce4dfcdac665d25783d610f9b6a48018b719c1a9552231fc761c4ad1fe1-merged.mount: Deactivated successfully.
Feb 02 11:12:06 compute-0 podman[86457]: 2026-02-02 11:12:06.188033727 +0000 UTC m=+0.670228027 container remove f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3 (image=quay.io/ceph/ceph:v19, name=romantic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:06 compute-0 systemd[1]: libpod-conmon-f634199d740f28f390cb83296021d714f8a60fdd7daecb0f8b32b0c77254ddf3.scope: Deactivated successfully.
Feb 02 11:12:06 compute-0 sudo[86454]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:06 compute-0 sudo[86534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxwpaswmvbntsdsmvqaptmditlyqkedn ; /usr/bin/python3'
Feb 02 11:12:06 compute-0 sudo[86534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:06 compute-0 python3[86536]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:06 compute-0 podman[86537]: 2026-02-02 11:12:06.549672925 +0000 UTC m=+0.044042824 container create 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:06 compute-0 systemd[1]: Started libpod-conmon-1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9.scope.
Feb 02 11:12:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2371f93a18e0836b6d17b7b13cb2ab0c427e2abe70b5d2a038363b6d97161c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2371f93a18e0836b6d17b7b13cb2ab0c427e2abe70b5d2a038363b6d97161c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:06 compute-0 podman[86537]: 2026-02-02 11:12:06.528196784 +0000 UTC m=+0.022566703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:06 compute-0 podman[86537]: 2026-02-02 11:12:06.632720357 +0000 UTC m=+0.127090276 container init 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:12:06 compute-0 podman[86537]: 2026-02-02 11:12:06.638112651 +0000 UTC m=+0.132482560 container start 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Feb 02 11:12:06 compute-0 podman[86537]: 2026-02-02 11:12:06.662186396 +0000 UTC m=+0.156556295 container attach 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:12:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb 02 11:12:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3479287648' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Feb 02 11:12:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb 02 11:12:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3479287648' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 02 11:12:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Feb 02 11:12:07 compute-0 interesting_tu[86553]: enabled application 'rbd' on pool 'vms'
Feb 02 11:12:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Feb 02 11:12:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:07 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:07 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:07 compute-0 systemd[1]: libpod-1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9.scope: Deactivated successfully.
Feb 02 11:12:07 compute-0 podman[86537]: 2026-02-02 11:12:07.16152361 +0000 UTC m=+0.655893529 container died 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/462396198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 11:12:07 compute-0 ceph-mon[74676]: osdmap e20: 3 total, 2 up, 3 in
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:07 compute-0 ceph-mon[74676]: mgrmap e11: compute-0.dhyzzj(active, since 108s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3479287648' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3479287648' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 02 11:12:07 compute-0 ceph-mon[74676]: osdmap e21: 3 total, 2 up, 3 in
Feb 02 11:12:07 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:07 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e2371f93a18e0836b6d17b7b13cb2ab0c427e2abe70b5d2a038363b6d97161c-merged.mount: Deactivated successfully.
Feb 02 11:12:07 compute-0 podman[86537]: 2026-02-02 11:12:07.398585454 +0000 UTC m=+0.892955353 container remove 1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9 (image=quay.io/ceph/ceph:v19, name=interesting_tu, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:12:07 compute-0 systemd[1]: libpod-conmon-1272dfa24abd3d7d0e130b2f0349d297b6705b91869ab2c0989cff6571b996d9.scope: Deactivated successfully.
Feb 02 11:12:07 compute-0 sudo[86534]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:07 compute-0 sudo[86615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voezbsfzzwlvaqaohwqfskaexutqtkbg ; /usr/bin/python3'
Feb 02 11:12:07 compute-0 sudo[86615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:07 compute-0 python3[86617]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:07 compute-0 podman[86618]: 2026-02-02 11:12:07.740948263 +0000 UTC m=+0.037946970 container create 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:12:07 compute-0 systemd[1]: Started libpod-conmon-0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870.scope.
Feb 02 11:12:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6026a507915a8076011be84eb16dd757b31fb3d23131b7c7dd3997ad60164a50/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6026a507915a8076011be84eb16dd757b31fb3d23131b7c7dd3997ad60164a50/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:07 compute-0 podman[86618]: 2026-02-02 11:12:07.723136627 +0000 UTC m=+0.020135364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:07 compute-0 podman[86618]: 2026-02-02 11:12:07.881385978 +0000 UTC m=+0.178384715 container init 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:07 compute-0 podman[86618]: 2026-02-02 11:12:07.88638469 +0000 UTC m=+0.183383397 container start 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:12:07 compute-0 podman[86618]: 2026-02-02 11:12:07.94824154 +0000 UTC m=+0.245240267 container attach 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb 02 11:12:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1196688495' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Feb 02 11:12:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb 02 11:12:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:09 compute-0 ceph-mon[74676]: pgmap v68: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:09 compute-0 ceph-mon[74676]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:09 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1196688495' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Feb 02 11:12:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1196688495' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 02 11:12:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Feb 02 11:12:12 compute-0 compassionate_sammet[86633]: enabled application 'rbd' on pool 'volumes'
Feb 02 11:12:12 compute-0 systemd[1]: libpod-0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870.scope: Deactivated successfully.
Feb 02 11:12:12 compute-0 podman[86618]: 2026-02-02 11:12:12.296595607 +0000 UTC m=+4.593594324 container died 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Feb 02 11:12:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:12 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:12 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6026a507915a8076011be84eb16dd757b31fb3d23131b7c7dd3997ad60164a50-merged.mount: Deactivated successfully.
Feb 02 11:12:12 compute-0 podman[86618]: 2026-02-02 11:12:12.623912058 +0000 UTC m=+4.920910765 container remove 0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870 (image=quay.io/ceph/ceph:v19, name=compassionate_sammet, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:12:12 compute-0 systemd[1]: libpod-conmon-0a1614ffb277a97dafb43e7be067c49ffe29cc4ac6d0147d475adbe5fb658870.scope: Deactivated successfully.
Feb 02 11:12:12 compute-0 sudo[86615]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:12 compute-0 sudo[86695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufjrgtofserghutbgvkfalnzmqldtvx ; /usr/bin/python3'
Feb 02 11:12:12 compute-0 sudo[86695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:12 compute-0 python3[86697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:13 compute-0 podman[86698]: 2026-02-02 11:12:12.938395994 +0000 UTC m=+0.018214939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:13 compute-0 podman[86698]: 2026-02-02 11:12:13.085045796 +0000 UTC m=+0.164864741 container create 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:12:13 compute-0 systemd[1]: Started libpod-conmon-3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6.scope.
Feb 02 11:12:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bccbb4597acca56181da332dfbdb1931ff591e52599b495d34e2ed021a5975/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bccbb4597acca56181da332dfbdb1931ff591e52599b495d34e2ed021a5975/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:13 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:13 compute-0 podman[86698]: 2026-02-02 11:12:13.371923677 +0000 UTC m=+0.451742622 container init 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:13 compute-0 podman[86698]: 2026-02-02 11:12:13.376218409 +0000 UTC m=+0.456037334 container start 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:13 compute-0 podman[86698]: 2026-02-02 11:12:13.416169556 +0000 UTC m=+0.495988501 container attach 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:12:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb 02 11:12:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3053165917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Feb 02 11:12:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb 02 11:12:14 compute-0 ceph-mon[74676]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3053165917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Feb 02 11:12:17 compute-0 magical_dirac[86713]: enabled application 'rbd' on pool 'backups'
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:17 compute-0 systemd[1]: libpod-3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6.scope: Deactivated successfully.
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:17 compute-0 podman[86698]: 2026-02-02 11:12:17.043522314 +0000 UTC m=+4.123341229 container died 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-51bccbb4597acca56181da332dfbdb1931ff591e52599b495d34e2ed021a5975-merged.mount: Deactivated successfully.
Feb 02 11:12:17 compute-0 podman[86698]: 2026-02-02 11:12:17.197902806 +0000 UTC m=+4.277721731 container remove 3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6 (image=quay.io/ceph/ceph:v19, name=magical_dirac, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:17 compute-0 systemd[1]: libpod-conmon-3a1d40047566f50a58e001adb6afbef5cd87ac120a2bd771240e17ac1b78e6e6.scope: Deactivated successfully.
Feb 02 11:12:17 compute-0 sudo[86695]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:17 compute-0 sudo[86774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqvdqluhtysxxeufrifadejawpepcvn ; /usr/bin/python3'
Feb 02 11:12:17 compute-0 sudo[86774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:17 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1196688495' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 02 11:12:17 compute-0 ceph-mon[74676]: osdmap e22: 3 total, 2 up, 3 in
Feb 02 11:12:17 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:17 compute-0 ceph-mon[74676]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:17 compute-0 ceph-mon[74676]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3053165917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Feb 02 11:12:17 compute-0 ceph-mon[74676]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:17 compute-0 python3[86776]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Feb 02 11:12:17 compute-0 podman[86777]: 2026-02-02 11:12:17.510315213 +0000 UTC m=+0.035997805 container create 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Feb 02 11:12:17 compute-0 systemd[1]: Started libpod-conmon-4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3.scope.
Feb 02 11:12:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6c92cd433e99e91dca0ea222776819164bf8052c9b2714ea01ab5fbdc945b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6c92cd433e99e91dca0ea222776819164bf8052c9b2714ea01ab5fbdc945b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:17 compute-0 podman[86777]: 2026-02-02 11:12:17.570190546 +0000 UTC m=+0.095873148 container init 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:17 compute-0 podman[86777]: 2026-02-02 11:12:17.577050861 +0000 UTC m=+0.102733443 container start 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:12:17 compute-0 podman[86777]: 2026-02-02 11:12:17.582014123 +0000 UTC m=+0.107696735 container attach 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:12:17 compute-0 podman[86777]: 2026-02-02 11:12:17.494674688 +0000 UTC m=+0.020357320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:12:17
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes']
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:12:17 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:12:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb 02 11:12:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/90308940' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb 02 11:12:18 compute-0 ceph-mon[74676]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3053165917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 02 11:12:18 compute-0 ceph-mon[74676]: osdmap e23: 3 total, 2 up, 3 in
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/90308940' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/90308940' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 02 11:12:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Feb 02 11:12:18 compute-0 epic_hermann[86792]: enabled application 'rbd' on pool 'images'
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Feb 02 11:12:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:18 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:18 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev ca29aad3-9d23-4547-8bfa-56092f237abd (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 02 11:12:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:18 compute-0 systemd[1]: libpod-4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3.scope: Deactivated successfully.
Feb 02 11:12:18 compute-0 podman[86777]: 2026-02-02 11:12:18.3809346 +0000 UTC m=+0.906617192 container died 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d6c92cd433e99e91dca0ea222776819164bf8052c9b2714ea01ab5fbdc945b8-merged.mount: Deactivated successfully.
Feb 02 11:12:18 compute-0 podman[86777]: 2026-02-02 11:12:18.415306178 +0000 UTC m=+0.940988770 container remove 4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3 (image=quay.io/ceph/ceph:v19, name=epic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:18 compute-0 systemd[1]: libpod-conmon-4746dcecb4f71e794f570428340af7437a2813fa748edc91319d66b2d1614ff3.scope: Deactivated successfully.
Feb 02 11:12:18 compute-0 sudo[86774]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:18 compute-0 sudo[86852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behwifqlqxmmwfhwfeqvkdibzubmonvv ; /usr/bin/python3'
Feb 02 11:12:18 compute-0 sudo[86852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:18 compute-0 python3[86854]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:18 compute-0 podman[86855]: 2026-02-02 11:12:18.746146278 +0000 UTC m=+0.042871530 container create 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:18 compute-0 systemd[1]: Started libpod-conmon-4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc.scope.
Feb 02 11:12:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177ab38453455fe1d885d980ba6e04a76c5e38f0922a4268ad508f3f78fead34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177ab38453455fe1d885d980ba6e04a76c5e38f0922a4268ad508f3f78fead34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:18 compute-0 podman[86855]: 2026-02-02 11:12:18.818937489 +0000 UTC m=+0.115662771 container init 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:18 compute-0 podman[86855]: 2026-02-02 11:12:18.822928982 +0000 UTC m=+0.119654234 container start 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:18 compute-0 podman[86855]: 2026-02-02 11:12:18.727895609 +0000 UTC m=+0.024620891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:18 compute-0 podman[86855]: 2026-02-02 11:12:18.826448603 +0000 UTC m=+0.123173855 container attach 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449568584' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449568584' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 02 11:12:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Feb 02 11:12:19 compute-0 epic_lalande[86871]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Feb 02 11:12:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:19 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev c974965b-e10b-4b7a-a75f-c4f0d5ab2ace (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 02 11:12:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: Deploying daemon osd.2 on compute-2
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/90308940' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 02 11:12:19 compute-0 ceph-mon[74676]: osdmap e24: 3 total, 2 up, 3 in
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/449568584' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Feb 02 11:12:19 compute-0 systemd[1]: libpod-4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc.scope: Deactivated successfully.
Feb 02 11:12:19 compute-0 conmon[86871]: conmon 4f13f051b1359e52c7b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc.scope/container/memory.events
Feb 02 11:12:19 compute-0 podman[86855]: 2026-02-02 11:12:19.393684499 +0000 UTC m=+0.690409751 container died 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-177ab38453455fe1d885d980ba6e04a76c5e38f0922a4268ad508f3f78fead34-merged.mount: Deactivated successfully.
Feb 02 11:12:19 compute-0 podman[86855]: 2026-02-02 11:12:19.426513193 +0000 UTC m=+0.723238445 container remove 4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc (image=quay.io/ceph/ceph:v19, name=epic_lalande, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:12:19 compute-0 systemd[1]: libpod-conmon-4f13f051b1359e52c7b4313ae27afdeede5c76850c823554bb4e1640df0604cc.scope: Deactivated successfully.
Feb 02 11:12:19 compute-0 sudo[86852]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 25 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25 pruub=13.937191010s) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active pruub 71.753944397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 25 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25 pruub=13.937191010s) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown pruub 71.753944397s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:19 compute-0 sudo[86932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfbyeuviwvqqibmjrhrvffrqiqthkvsu ; /usr/bin/python3'
Feb 02 11:12:19 compute-0 sudo[86932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:19 compute-0 python3[86934]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:19 compute-0 podman[86935]: 2026-02-02 11:12:19.746669531 +0000 UTC m=+0.035887802 container create f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:12:19 compute-0 systemd[1]: Started libpod-conmon-f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93.scope.
Feb 02 11:12:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce950750c9b9ba7d56f94d8eced7cb985418f8388b68dca05f05d63323a3de17/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce950750c9b9ba7d56f94d8eced7cb985418f8388b68dca05f05d63323a3de17/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:19 compute-0 podman[86935]: 2026-02-02 11:12:19.819251975 +0000 UTC m=+0.108470246 container init f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:12:19 compute-0 podman[86935]: 2026-02-02 11:12:19.826612265 +0000 UTC m=+0.115830526 container start f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:19 compute-0 podman[86935]: 2026-02-02 11:12:19.73259998 +0000 UTC m=+0.021818261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:19 compute-0 podman[86935]: 2026-02-02 11:12:19.830264079 +0000 UTC m=+0.119482540 container attach f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2891260749' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2891260749' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Feb 02 11:12:20 compute-0 gracious_heisenberg[86950]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:20 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev af8eb2b4-f621-42fc-ad41-b46c07224365 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1d( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1f( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1e( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1c( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1b( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.a( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.9( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.8( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.7( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.6( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.4( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.2( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.5( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.3( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.b( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.c( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.d( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.e( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.f( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.10( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.11( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.12( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.14( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.13( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.17( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.16( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.15( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.18( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.19( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1a( empty local-lis/les=14/15 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:20 compute-0 ceph-mon[74676]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:20 compute-0 ceph-mon[74676]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/449568584' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: osdmap e25: 3 total, 2 up, 3 in
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2891260749' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2891260749' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 02 11:12:20 compute-0 ceph-mon[74676]: osdmap e26: 3 total, 2 up, 3 in
Feb 02 11:12:20 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 systemd[1]: libpod-f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93.scope: Deactivated successfully.
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.a( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.9( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.8( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.7( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.6( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.2( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.5( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.0( empty local-lis/les=25/26 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.3( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.4( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.10( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.11( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.12( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.14( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.13( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.17( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.16( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.18( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.1a( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.19( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 26 pg[2.15( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [1] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:20 compute-0 podman[86975]: 2026-02-02 11:12:20.444006838 +0000 UTC m=+0.027269517 container died f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce950750c9b9ba7d56f94d8eced7cb985418f8388b68dca05f05d63323a3de17-merged.mount: Deactivated successfully.
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:20 compute-0 podman[86975]: 2026-02-02 11:12:20.478454518 +0000 UTC m=+0.061717177 container remove f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93 (image=quay.io/ceph/ceph:v19, name=gracious_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:12:20 compute-0 systemd[1]: libpod-conmon-f1438d4e8649d1ae6dc26fcf5cbb7238bfb5174258d2ed9c8092adc0f66d5b93.scope: Deactivated successfully.
Feb 02 11:12:20 compute-0 sudo[86932]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb 02 11:12:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:21 compute-0 python3[87064]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:12:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Feb 02 11:12:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:21 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:21 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev fb915943-6e6b-4bdc-838d-429f6bd64901 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 02 11:12:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:21 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:21 compute-0 python3[87135]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030741.0653968-37366-276183123828942/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:12:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb 02 11:12:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Feb 02 11:12:21 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 02 11:12:21 compute-0 sudo[87235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwozzdxlqazmlvzwhqctdrmeqcxkqksc ; /usr/bin/python3'
Feb 02 11:12:21 compute-0 sudo[87235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:22 compute-0 python3[87237]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:12:22 compute-0 sudo[87235]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:22 compute-0 sudo[87310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glduiwmowmidtcqjdxfyllnqjqwymhbj ; /usr/bin/python3'
Feb 02 11:12:22 compute-0 sudo[87310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:22 compute-0 python3[87312]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030741.8447366-37380-211113238063266/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4c0a9dd1368fd30408ba4db30d22322c46257906 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb 02 11:12:22 compute-0 sudo[87310]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:22 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 8e00bf8e-e52d-44a0-9e11-240138589e71 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: 2.1d scrub starts
Feb 02 11:12:22 compute-0 ceph-mon[74676]: 2.1d scrub ok
Feb 02 11:12:22 compute-0 ceph-mon[74676]: pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:22 compute-0 ceph-mon[74676]: osdmap e27: 3 total, 2 up, 3 in
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: 2.1b scrub starts
Feb 02 11:12:22 compute-0 ceph-mon[74676]: 2.1b scrub ok
Feb 02 11:12:22 compute-0 ceph-mon[74676]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: Cluster is now healthy
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:22 compute-0 ceph-mon[74676]: osdmap e28: 3 total, 2 up, 3 in
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:22 compute-0 sudo[87360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dixyhqrnkyirfpdganeqnxzspndtuyjv ; /usr/bin/python3'
Feb 02 11:12:22 compute-0 sudo[87360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:22 compute-0 sudo[87363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:12:22 compute-0 sudo[87363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:22 compute-0 sudo[87363]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Feb 02 11:12:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Feb 02 11:12:22 compute-0 python3[87362]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:22 compute-0 podman[87388]: 2026-02-02 11:12:22.776275294 +0000 UTC m=+0.048651485 container create 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:12:22 compute-0 sudo[87401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:22 compute-0 sudo[87401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:22 compute-0 sudo[87401]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:22 compute-0 systemd[76004]: Starting Mark boot as successful...
Feb 02 11:12:22 compute-0 systemd[76004]: Finished Mark boot as successful.
Feb 02 11:12:22 compute-0 systemd[1]: Started libpod-conmon-4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8.scope.
Feb 02 11:12:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72834fbd99596b03b35f129a72792c548723cc54f7e98e97d20e946bb6e3b412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72834fbd99596b03b35f129a72792c548723cc54f7e98e97d20e946bb6e3b412/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72834fbd99596b03b35f129a72792c548723cc54f7e98e97d20e946bb6e3b412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:22 compute-0 sudo[87427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:12:22 compute-0 sudo[87427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:22 compute-0 podman[87388]: 2026-02-02 11:12:22.748223686 +0000 UTC m=+0.020599897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:22 compute-0 podman[87388]: 2026-02-02 11:12:22.850851776 +0000 UTC m=+0.123227977 container init 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:12:22 compute-0 podman[87388]: 2026-02-02 11:12:22.856247719 +0000 UTC m=+0.128623910 container start 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:12:22 compute-0 podman[87388]: 2026-02-02 11:12:22.887059936 +0000 UTC m=+0.159436127 container attach 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:22 compute-0 ceph-mgr[74969]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Feb 02 11:12:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v84: 100 pgs: 93 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2819823762' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:12:23 compute-0 sudo[87427]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2819823762' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 11:12:23 compute-0 pedantic_chaum[87447]: 
Feb 02 11:12:23 compute-0 pedantic_chaum[87447]: [global]
Feb 02 11:12:23 compute-0 pedantic_chaum[87447]:         fsid = 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:23 compute-0 pedantic_chaum[87447]:         mon_host = 192.168.122.100
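NOTE: "config assimilate-conf -i <file>" moves options from an INI-style conf into the monitors' central config database; what the container prints back is the residue that cannot be centralized, which is why only fsid and mon_host come out under [global]. The same call without the podman wrapper, as a hedged sketch assuming local admin credentials:

    # ingest local options into the mon config store; stdout is
    # the minimal conf that must stay on disk
    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config assimilate-conf -i /home/ceph-admin/assimilate_ceph.conf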
Feb 02 11:12:23 compute-0 systemd[1]: libpod-4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8.scope: Deactivated successfully.
Feb 02 11:12:23 compute-0 podman[87388]: 2026-02-02 11:12:23.311375006 +0000 UTC m=+0.583751217 container died 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-72834fbd99596b03b35f129a72792c548723cc54f7e98e97d20e946bb6e3b412-merged.mount: Deactivated successfully.
Feb 02 11:12:23 compute-0 podman[87388]: 2026-02-02 11:12:23.392548456 +0000 UTC m=+0.664924647 container remove 4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8 (image=quay.io/ceph/ceph:v19, name=pedantic_chaum, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:12:23 compute-0 systemd[1]: libpod-conmon-4fa8c6554bb3706547935363dfa004a7a7e768e5f42fbfe0b8df10cccd0e41b8.scope: Deactivated successfully.
Feb 02 11:12:23 compute-0 sudo[87360]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 0415c68f-c681-4564-8248-b54fe5ef9cbf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev ca29aad3-9d23-4547-8bfa-56092f237abd (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event ca29aad3-9d23-4547-8bfa-56092f237abd (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev c974965b-e10b-4b7a-a75f-c4f0d5ab2ace (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event c974965b-e10b-4b7a-a75f-c4f0d5ab2ace (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev af8eb2b4-f621-42fc-ad41-b46c07224365 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event af8eb2b4-f621-42fc-ad41-b46c07224365 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev fb915943-6e6b-4bdc-838d-429f6bd64901 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event fb915943-6e6b-4bdc-838d-429f6bd64901 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 8e00bf8e-e52d-44a0-9e11-240138589e71 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 8e00bf8e-e52d-44a0-9e11-240138589e71 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 0415c68f-c681-4564-8248-b54fe5ef9cbf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 02 11:12:23 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 0415c68f-c681-4564-8248-b54fe5ef9cbf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
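NOTE: this block is the pg_autoscaler draining its event queue: pools 2 through 7 each went from 1 PG to 32, and the "osd pool set ... pg_num"/"pg_num_actual" audit lines throughout this window are the mgr driving that split via ordinary mon commands. The transient "unknown" PGs in the pgmap lines (31, then 93) are the freshly split placement groups waiting to peer. A hedged manual equivalent for one pool:

    # the same mon command the autoscaler issues per pool;
    # the mon walks pg_num_actual up as the split proceeds
    ceph osd pool set images pg_num 32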
Feb 02 11:12:23 compute-0 sudo[87545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btrtalwrbdxtbdwprhtvixzkcpatbsge ; /usr/bin/python3'
Feb 02 11:12:23 compute-0 sudo[87545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 ceph-mon[74676]: pgmap v84: 100 pgs: 93 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2819823762' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2819823762' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:23 compute-0 ceph-mon[74676]: osdmap e29: 3 total, 2 up, 3 in
Feb 02 11:12:23 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:23 compute-0 python3[87547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:23 compute-0 podman[87548]: 2026-02-02 11:12:23.727834293 +0000 UTC m=+0.039007020 container create 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb 02 11:12:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb 02 11:12:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:23 compute-0 systemd[1]: Started libpod-conmon-440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa.scope.
Feb 02 11:12:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33939b014d07d5c3fb6c512982dc9b602f2efc3157852ba04391d413e48ca0b4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33939b014d07d5c3fb6c512982dc9b602f2efc3157852ba04391d413e48ca0b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33939b014d07d5c3fb6c512982dc9b602f2efc3157852ba04391d413e48ca0b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:23 compute-0 podman[87548]: 2026-02-02 11:12:23.71082781 +0000 UTC m=+0.022000537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:23 compute-0 podman[87548]: 2026-02-02 11:12:23.846479219 +0000 UTC m=+0.157651936 container init 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:23 compute-0 podman[87548]: 2026-02-02 11:12:23.851818701 +0000 UTC m=+0.162991428 container start 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:12:23 compute-0 podman[87548]: 2026-02-02 11:12:23.874438594 +0000 UTC m=+0.185611331 container attach 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb 02 11:12:24 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1763341431' entity='client.admin' 
Feb 02 11:12:24 compute-0 bold_archimedes[87563]: set ssl_option
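NOTE: "config-key set" stores an opaque key/value pair in the mon store; here it records the RGW SSL options string passed on the podman command line above. The audit channel deliberately elides the payload of config-key commands, which is why several audit lines in this log stop right after entity='...'. Reading it back, as a hedged sketch:

    # config-key is a generic blob store on the mons;
    # values never appear in the audit log
    ceph config-key get ssl_option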
Feb 02 11:12:24 compute-0 systemd[1]: libpod-440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa.scope: Deactivated successfully.
Feb 02 11:12:24 compute-0 podman[87548]: 2026-02-02 11:12:24.382520838 +0000 UTC m=+0.693693575 container died 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-33939b014d07d5c3fb6c512982dc9b602f2efc3157852ba04391d413e48ca0b4-merged.mount: Deactivated successfully.
Feb 02 11:12:24 compute-0 podman[87548]: 2026-02-02 11:12:24.422533946 +0000 UTC m=+0.733706673 container remove 440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa (image=quay.io/ceph/ceph:v19, name=bold_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:12:24 compute-0 systemd[1]: libpod-conmon-440c72ce86a77eb1fbf72de45621517d2a5b459eb7ee76d61a2694cc927405aa.scope: Deactivated successfully.
Feb 02 11:12:24 compute-0 sudo[87545]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:24 compute-0 sudo[87623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gihknvdqxkumpcgqjaaiapjmhcrouyxy ; /usr/bin/python3'
Feb 02 11:12:24 compute-0 sudo[87623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:24 compute-0 ceph-mon[74676]: 2.1e deep-scrub starts
Feb 02 11:12:24 compute-0 ceph-mon[74676]: 2.1e deep-scrub ok
Feb 02 11:12:24 compute-0 ceph-mon[74676]: 3.18 scrub starts
Feb 02 11:12:24 compute-0 ceph-mon[74676]: 3.18 scrub ok
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='osd.2 [v2:192.168.122.102:6800/1439877520,v1:192.168.122.102:6801/1439877520]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1763341431' entity='client.admin' 
Feb 02 11:12:24 compute-0 python3[87625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb 02 11:12:24 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Feb 02 11:12:24 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:24 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:24 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Feb 02 11:12:24 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb 02 11:12:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e30 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
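NOTE: this is osd.2 registering itself at boot: it first claims device class "hdd", then asks CRUSH to create-or-move its item under host=compute-2, root=default. CRUSH weights are expressed in TiB, so initial_weight 0.0195 is about 0.0195 x 1024 ≈ 20 GiB of backing disk, matching the 40 GiB total the pgmap reports for the two OSDs already up. The equivalent manual commands, normally unnecessary since the OSD runs them itself:

    # what a booting OSD sends to the mon
    ceph osd crush set-device-class hdd osd.2
    ceph osd crush create-or-move osd.2 0.0195 host=compute-2 root=default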
Feb 02 11:12:24 compute-0 podman[87626]: 2026-02-02 11:12:24.726779181 +0000 UTC m=+0.038799465 container create fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:12:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 02 11:12:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 02 11:12:24 compute-0 systemd[1]: Started libpod-conmon-fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01.scope.
Feb 02 11:12:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd9d9c1700e92d78f34d4d3b499daea3f18e5e4a6cd8eb583515a8fa88ddc39/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd9d9c1700e92d78f34d4d3b499daea3f18e5e4a6cd8eb583515a8fa88ddc39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd9d9c1700e92d78f34d4d3b499daea3f18e5e4a6cd8eb583515a8fa88ddc39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:24 compute-0 podman[87626]: 2026-02-02 11:12:24.70884376 +0000 UTC m=+0.020864074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:24 compute-0 podman[87626]: 2026-02-02 11:12:24.911613499 +0000 UTC m=+0.223633803 container init fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:24 compute-0 podman[87626]: 2026-02-02 11:12:24.916950031 +0000 UTC m=+0.228970315 container start fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:12:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v87: 162 pgs: 62 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:25 compute-0 podman[87626]: 2026-02-02 11:12:25.044664194 +0000 UTC m=+0.356684488 container attach fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:25 compute-0 cool_banzai[87642]: Scheduled rgw.rgw update...
Feb 02 11:12:25 compute-0 cool_banzai[87642]: Scheduled ingress.rgw.default update...
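NOTE: "orch apply --in-file /home/ceph_spec.yaml" hands cephadm a service specification, and the two "Saving service ... spec" / "Scheduled ... update" pairs show it declared an RGW service on compute-0/1/2 plus an ingress (haproxy/keepalived) front end with count:2. The spec file itself is not in the log; the following is a hypothetical reconstruction that would produce exactly these messages (the backend wiring, VIP and ports are assumptions):

    # hedged sketch of /tmp/ceph_rgw.yml; VIP and ports are invented
    cat <<'EOF' > /tmp/ceph_rgw.yml
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw
      virtual_ip: 192.168.122.200/24
      frontend_port: 8080
      monitor_port: 8999
    EOF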
Feb 02 11:12:25 compute-0 systemd[1]: libpod-fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01.scope: Deactivated successfully.
Feb 02 11:12:25 compute-0 podman[87626]: 2026-02-02 11:12:25.333098409 +0000 UTC m=+0.645118703 container died fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddd9d9c1700e92d78f34d4d3b499daea3f18e5e4a6cd8eb583515a8fa88ddc39-merged.mount: Deactivated successfully.
Feb 02 11:12:25 compute-0 podman[87626]: 2026-02-02 11:12:25.368915438 +0000 UTC m=+0.680935722 container remove fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01 (image=quay.io/ceph/ceph:v19, name=cool_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:12:25 compute-0 systemd[1]: libpod-conmon-fd245b5dad05944cd4aa60dcc9e6a09a406e329cef8a2ba4d13463588c1f6c01.scope: Deactivated successfully.
Feb 02 11:12:25 compute-0 sudo[87623]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 2.1c scrub starts
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 2.1c scrub ok
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 4.1f scrub starts
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 4.1f scrub ok
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='osd.2 [v2:192.168.122.102:6800/1439877520,v1:192.168.122.102:6801/1439877520]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mon[74676]: osdmap e30: 3 total, 2 up, 3 in
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 3.16 scrub starts
Feb 02 11:12:25 compute-0 ceph-mon[74676]: 3.16 scrub ok
Feb 02 11:12:25 compute-0 ceph-mon[74676]: pgmap v87: 162 pgs: 62 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mon[74676]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:25 compute-0 ceph-mon[74676]: Saving service ingress.rgw.default spec with placement count:2
Feb 02 11:12:25 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Feb 02 11:12:25 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 31 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=13.560984612s) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 77.608695984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:25 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 31 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=13.560984612s) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown pruub 77.608695984s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:25 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:25 compute-0 python3[87753]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:12:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 02 11:12:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 02 11:12:26 compute-0 python3[87824]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030745.4962764-37399-253517250124107/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
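NOTE: cephadm's memory autotuner splits the RAM it considers usable on compute-2 across the OSDs placed there; 134203392 bytes is ≈128 MiB (logged as 127.9M), which falls below osd_memory_target's hard floor of 939524096 bytes (exactly 896 MiB), so the mon rejects the value and the preceding "config rm" leaves the default in effect. These CI VMs are simply too small; on real hardware one would add memory, steer the ratio, or pin a value. A hedged sketch (option name taken from cephadm, worth verifying on the running release):

    # fraction of host RAM the autotuner may assign to OSDs
    ceph config get mgr mgr/cephadm/autotune_memory_target_ratio
    # or pin a per-host value at or above the 896 MiB floor
    ceph config set osd/host:compute-2 osd_memory_target 939524096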
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 sudo[87849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:12:26 compute-0 sudo[87849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[87849]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[87874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:12:26 compute-0 sudo[87874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[87874]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[87899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[87899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[87899]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[87924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:26 compute-0 sudo[87924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[87924]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[87949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[87949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[87949]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[88020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eezpkbojaakcoflpwomvzvdxdjldatcq ; /usr/bin/python3'
Feb 02 11:12:26 compute-0 sudo[88020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:26 compute-0 sudo[88021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[88021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88021]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[88048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[88048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88048]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[88073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 sudo[88073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88073]: pam_unix(sudo:session): session closed for user root
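The sudo trail above (mkdir, touch, chown, chmod, mv) is cephadm distributing the regenerated minimal ceph.conf as the ceph-admin user: the file is staged under /tmp/cephadm-<fsid>/ with final ownership and mode already applied, then moved onto /etc/ceph/ceph.conf in one step. A minimal Python sketch of the same stage-then-install pattern (paths and ownership are illustrative; cephadm itself drives each step through a separate sudo command, as logged, and this sketch stages in the destination directory so the final rename is atomic):

    import os
    import tempfile

    def install_file(data: bytes, dest: str, uid: int = 0, gid: int = 0,
                     mode: int = 0o644) -> None:
        os.makedirs(os.path.dirname(dest), exist_ok=True)      # the mkdir -p step
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest), suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.chown(tmp, uid, gid)                            # the chown -R 0:0 step
            os.chmod(tmp, mode)                                # the chmod 644 step
            os.replace(tmp, dest)                              # the mv step
        except BaseException:
            os.unlink(tmp)
            raise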
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mon[74676]: 2.a scrub starts
Feb 02 11:12:26 compute-0 ceph-mon[74676]: 2.a scrub ok
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:12:26 compute-0 ceph-mon[74676]: osdmap e31: 3 total, 2 up, 3 in
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: 4.10 deep-scrub starts
Feb 02 11:12:26 compute-0 ceph-mon[74676]: 4.10 deep-scrub ok
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: Adjusting osd_memory_target on compute-2 to 127.9M
Feb 02 11:12:26 compute-0 ceph-mon[74676]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:26 compute-0 python3[88026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
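The flattened ansible-ansible.legacy.command line above is easier to read as an argument vector: the Zuul task runs the ceph CLI out of the v19 container against the local cluster and applies the monitoring service spec. Restated in Python with the exact flags from the log (only the formatting is new):

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "apply", "--in-file", "/home/ceph_spec.yaml",
    ]
    subprocess.run(cmd, check=True)

The "Saving service ... spec" and "Scheduled ... update" lines further down are the mgr acknowledging this apply.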
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:26 compute-0 sudo[88098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:26 compute-0 sudo[88098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88098]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 podman[88104]: 2026-02-02 11:12:26.673697655 +0000 UTC m=+0.041648226 container create 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb 02 11:12:26 compute-0 systemd[1]: Started libpod-conmon-6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d.scope.
Feb 02 11:12:26 compute-0 sudo[88134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:26 compute-0 sudo[88134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88134]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 02 11:12:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37532ac576ac92dd63e1f38b513b3ec709e972e90c9335ae9c124db80b8a59ee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37532ac576ac92dd63e1f38b513b3ec709e972e90c9335ae9c124db80b8a59ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37532ac576ac92dd63e1f38b513b3ec709e972e90c9335ae9c124db80b8a59ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Feb 02 11:12:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:26 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
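The recurring pair above (handle_open ... not ready for session, then mgr finish mon failed to return metadata for osd.2: (2) No such file or directory) is expected at this point: osd.2 was just recreated (the crush create-or-move finished earlier) and has not yet re-registered its metadata with the mons, so the mgr's lookup returns ENOENT until the OSD's session settles. The same query can be reproduced with the python-rados binding (conffile path assumed; mon_command takes a JSON command string and returns ret/out/err):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")   # assumed admin context
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd metadata", "id": 2}), b"")
        print(ret, errs)   # -2 (ENOENT) while osd.2's metadata is not yet registered
    finally:
        cluster.shutdown()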
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.12( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.13( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.10( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.11( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.16( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.17( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.15( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.8( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.9( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 podman[88104]: 2026-02-02 11:12:26.65666798 +0000 UTC m=+0.024618561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.6( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 podman[88104]: 2026-02-02 11:12:26.752908168 +0000 UTC m=+0.120858769 container init 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.5( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.4( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.14( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.7( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.3( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.2( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.19( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.18( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:26 compute-0 podman[88104]: 2026-02-02 11:12:26.757313293 +0000 UTC m=+0.125263864 container start 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 sudo[88167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[88167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.11( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.17( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 sudo[88167]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 podman[88104]: 2026-02-02 11:12:26.773464193 +0000 UTC m=+0.141414794 container attach 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.0( empty local-lis/les=31/32 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:26 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 32 pg[7.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [1] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
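The burst of osd.1 pg_epoch: 32 lines above is pool 7's PGs (newly created, ec=31/20) peering after osdmap e31/e32: each PG first logs state<Start>: transitioning to Primary, then react AllReplicasActivated Activating complete once epoch 32 lands. With a single OSD in the acting set ([1] r=0) there are no peers to wait on, so the two phases follow each other almost immediately. For eyeballing bursts like this in a saved journal, a small self-contained tally script (regex written for the line shape above, fed e.g. via journalctl | python3 pg_tally.py):

    import re
    import sys
    from collections import Counter

    # Matches e.g. "pg[7.1f( ... ] state<Start>: transitioning to Primary"
    pat = re.compile(r"pg\[(\d+)\.[0-9a-f]+\(.*?state<[^>]*>: (.+)$")

    counts = Counter()
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            counts[(m.group(1), m.group(2).strip())] += 1

    for (pool, event), n in sorted(counts.items()):
        print(f"pool {pool}: {n:3d}x {event}")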
Feb 02 11:12:26 compute-0 sudo[88193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:26 compute-0 sudo[88193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88193]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[88218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[88218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88218]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:26 compute-0 sudo[88285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:26 compute-0 sudo[88285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:26 compute-0 sudo[88285]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:27 compute-0 sudo[88310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:27 compute-0 sudo[88310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:27 compute-0 sudo[88310]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:27 compute-0 sudo[88335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:27 compute-0 sudo[88335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:27 compute-0 sudo[88335]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service node-exporter spec with placement *
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Feb 02 11:12:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb 02 11:12:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:27 compute-0 ceph-mon[74676]: purged_snaps scrub starts
Feb 02 11:12:27 compute-0 ceph-mon[74676]: purged_snaps scrub ok
Feb 02 11:12:27 compute-0 ceph-mon[74676]: 2.1f scrub starts
Feb 02 11:12:27 compute-0 ceph-mon[74676]: 2.1f scrub ok
Feb 02 11:12:27 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:27 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:27 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:27 compute-0 ceph-mon[74676]: osdmap e32: 3 total, 2 up, 3 in
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:27 compute-0 ceph-mon[74676]: 4.1e scrub starts
Feb 02 11:12:27 compute-0 ceph-mon[74676]: 4.1e scrub ok
Feb 02 11:12:27 compute-0 ceph-mon[74676]: pgmap v90: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:27 compute-0 ceph-mon[74676]: Saving service node-exporter spec with placement *
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: Saving service grafana spec with placement compute-0;count:1
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:27 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 11 completed events
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:12:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Feb 02 11:12:28 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Feb 02 11:12:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:28 compute-0 sudo[88361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:28 compute-0 sudo[88361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:28 compute-0 sudo[88361]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 sudo[88386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:12:28 compute-0 sudo[88386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 hungry_austin[88163]: Scheduled node-exporter update...
Feb 02 11:12:28 compute-0 hungry_austin[88163]: Scheduled grafana update...
Feb 02 11:12:28 compute-0 hungry_austin[88163]: Scheduled prometheus update...
Feb 02 11:12:28 compute-0 hungry_austin[88163]: Scheduled alertmanager update...
Feb 02 11:12:28 compute-0 systemd[1]: libpod-6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d.scope: Deactivated successfully.
Feb 02 11:12:28 compute-0 podman[88104]: 2026-02-02 11:12:28.576494654 +0000 UTC m=+1.944445225 container died 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-37532ac576ac92dd63e1f38b513b3ec709e972e90c9335ae9c124db80b8a59ee-merged.mount: Deactivated successfully.
Feb 02 11:12:28 compute-0 podman[88104]: 2026-02-02 11:12:28.685100333 +0000 UTC m=+2.053050904 container remove 6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d (image=quay.io/ceph/ceph:v19, name=hungry_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:12:28 compute-0 systemd[1]: libpod-conmon-6104845aaca56105950bd6678954e330aed0afe3d1e7fc3cbd38839c05a59a6d.scope: Deactivated successfully.
Feb 02 11:12:28 compute-0 sudo[88020]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 02 11:12:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 02 11:12:28 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 2.9 scrub starts
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 2.9 scrub ok
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: Saving service prometheus spec with placement compute-0;count:1
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 2.7 scrub starts
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 2.7 scrub ok
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 4.11 deep-scrub starts
Feb 02 11:12:28 compute-0 ceph-mon[74676]: 4.11 deep-scrub ok
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: Saving service alertmanager spec with placement compute-0;count:1
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:28 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.802234236 +0000 UTC m=+0.036689915 container create e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:12:28 compute-0 systemd[1]: Started libpod-conmon-e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5.scope.
Feb 02 11:12:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.785847599 +0000 UTC m=+0.020303308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.889936671 +0000 UTC m=+0.124392370 container init e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.89554343 +0000 UTC m=+0.129999109 container start e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:12:28 compute-0 brave_yonath[88481]: 167 167
Feb 02 11:12:28 compute-0 systemd[1]: libpod-e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5.scope: Deactivated successfully.
Feb 02 11:12:28 compute-0 conmon[88481]: conmon e6d198dd885b16123ec8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5.scope/container/memory.events
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.903789495 +0000 UTC m=+0.138245174 container attach e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.904224577 +0000 UTC m=+0.138680256 container died e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0398802c4797fdbce6de60361987fd111fefc37cbf6a250f0ad7356cfa7143-merged.mount: Deactivated successfully.
Feb 02 11:12:28 compute-0 podman[88464]: 2026-02-02 11:12:28.963562845 +0000 UTC m=+0.198018514 container remove e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:12:28 compute-0 systemd[1]: libpod-conmon-e6d198dd885b16123ec8e8d7827a19b72528906eaca874a689ae807e57be20d5.scope: Deactivated successfully.
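Interleaved with the ceph messages, podman logs a complete one-shot container lifecycle for each cephadm/ceph CLI call: create -> init -> start -> attach -> died -> remove, each stamped with the container ID (hungry_austin and brave_yonath above). A short sketch for reconstructing those timelines from a saved journal (event names taken from the podman lines above):

    import re
    import sys
    from collections import defaultdict

    evt = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    timelines = defaultdict(list)
    for line in sys.stdin:
        m = evt.search(line)
        if m:
            timelines[m.group(2)[:12]].append(m.group(1))

    for cid, events in timelines.items():
        print(cid, "->", " -> ".join(events))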
Feb 02 11:12:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.123384361 +0000 UTC m=+0.069910069 container create 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:12:29 compute-0 systemd[1]: Started libpod-conmon-18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979.scope.
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.075698285 +0000 UTC m=+0.022224013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 sudo[88547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mandzdwblapkejijjvrqtxrfbngahbgr ; /usr/bin/python3'
Feb 02 11:12:29 compute-0 sudo[88547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.228687337 +0000 UTC m=+0.175213075 container init 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.237949211 +0000 UTC m=+0.184474919 container start 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.294929082 +0000 UTC m=+0.241454820 container attach 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:29 compute-0 python3[88549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:29 compute-0 podman[88552]: 2026-02-02 11:12:29.407476332 +0000 UTC m=+0.038061823 container create a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:12:29 compute-0 podman[88552]: 2026-02-02 11:12:29.392721023 +0000 UTC m=+0.023306534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:29 compute-0 systemd[1]: Started libpod-conmon-a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4.scope.
Feb 02 11:12:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81ded7ad645e92f4dd19a6effd02bc715e57925951011cc064d47146ac3a92/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81ded7ad645e92f4dd19a6effd02bc715e57925951011cc064d47146ac3a92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81ded7ad645e92f4dd19a6effd02bc715e57925951011cc064d47146ac3a92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:29 compute-0 silly_liskov[88521]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:12:29 compute-0 silly_liskov[88521]: --> All data devices are unavailable
Feb 02 11:12:29 compute-0 podman[88552]: 2026-02-02 11:12:29.560734972 +0000 UTC m=+0.191320483 container init a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:29 compute-0 podman[88552]: 2026-02-02 11:12:29.566190507 +0000 UTC m=+0.196775998 container start a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:12:29 compute-0 podman[88552]: 2026-02-02 11:12:29.574514764 +0000 UTC m=+0.205100285 container attach a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:29 compute-0 systemd[1]: libpod-18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979.scope: Deactivated successfully.
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.582996855 +0000 UTC m=+0.529522563 container died 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:29 compute-0 podman[88505]: 2026-02-02 11:12:29.631001021 +0000 UTC m=+0.577526729 container remove 18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_liskov, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:29 compute-0 systemd[1]: libpod-conmon-18b229d5c4193ff01796e5b7d649a873fadfd9579c31aa3424abae227ee07979.scope: Deactivated successfully.
Feb 02 11:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d86780794edb6ffcce6d7a2a8db0204945566a25275fb885b67410814fc0ca5-merged.mount: Deactivated successfully.
Feb 02 11:12:29 compute-0 sudo[88386]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 02 11:12:29 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:29 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:29 compute-0 sudo[88614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:29 compute-0 sudo[88614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 02 11:12:29 compute-0 sudo[88614]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:29 compute-0 sudo[88639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:12:29 compute-0 sudo[88639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: 2.8 scrub starts
Feb 02 11:12:30 compute-0 ceph-mon[74676]: 2.8 scrub ok
Feb 02 11:12:30 compute-0 ceph-mon[74676]: 4.15 scrub starts
Feb 02 11:12:30 compute-0 ceph-mon[74676]: 4.15 scrub ok
Feb 02 11:12:30 compute-0 ceph-mon[74676]: pgmap v91: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:30 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3132421544' entity='client.admin' 
Feb 02 11:12:30 compute-0 systemd[1]: libpod-a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88552]: 2026-02-02 11:12:30.07141224 +0000 UTC m=+0.701997731 container died a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd81ded7ad645e92f4dd19a6effd02bc715e57925951011cc064d47146ac3a92-merged.mount: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88552]: 2026-02-02 11:12:30.11993484 +0000 UTC m=+0.750520331 container remove a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4 (image=quay.io/ceph/ceph:v19, name=determined_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:12:30 compute-0 systemd[1]: libpod-conmon-a6f04ce09d76a854c0879f69849cdf0576b2ea785fc707278d9a7b46defbf0a4.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 sudo[88547]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.148123042 +0000 UTC m=+0.069664483 container create ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:12:30 compute-0 systemd[1]: Started libpod-conmon-ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0.scope.
Feb 02 11:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.212498843 +0000 UTC m=+0.134040464 container init ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.119454056 +0000 UTC m=+0.040995497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.219512063 +0000 UTC m=+0.141053504 container start ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:30 compute-0 optimistic_gauss[88733]: 167 167
Feb 02 11:12:30 compute-0 systemd[1]: libpod-ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.225802392 +0000 UTC m=+0.147343833 container attach ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.227286304 +0000 UTC m=+0.148827745 container died ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:30 compute-0 sudo[88769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khyuwgqrzlsgrdzynencnfbaclyqiotj ; /usr/bin/python3'
Feb 02 11:12:30 compute-0 sudo[88769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-478d5a6b84717609eb3ab993b3b090ef499aac88d58eb406a94409b651274b2f-merged.mount: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88704]: 2026-02-02 11:12:30.329657836 +0000 UTC m=+0.251199267 container remove ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:12:30 compute-0 systemd[1]: libpod-conmon-ed36cf4915acc8547b3381653f18efbb792630ccfd5eba14d1163515ac8189e0.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 python3[88774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.475956608 +0000 UTC m=+0.055491700 container create 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.507082393 +0000 UTC m=+0.082441126 container create 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:12:30 compute-0 systemd[1]: Started libpod-conmon-86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975.scope.
Feb 02 11:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:30 compute-0 systemd[1]: Started libpod-conmon-0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e.scope.
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.446385067 +0000 UTC m=+0.025920189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d86b6dff29e461765036b158f44715e39b977701c8860c27bd24b465c668dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d86b6dff29e461765036b158f44715e39b977701c8860c27bd24b465c668dc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d86b6dff29e461765036b158f44715e39b977701c8860c27bd24b465c668dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.452416748 +0000 UTC m=+0.027775501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.563264012 +0000 UTC m=+0.142799124 container init 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.568727787 +0000 UTC m=+0.148262879 container start 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9819f7888bac1e9d5b2ef028dfd1f87185d077b7991fe7f7f67894ce89680a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9819f7888bac1e9d5b2ef028dfd1f87185d077b7991fe7f7f67894ce89680a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9819f7888bac1e9d5b2ef028dfd1f87185d077b7991fe7f7f67894ce89680a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9819f7888bac1e9d5b2ef028dfd1f87185d077b7991fe7f7f67894ce89680a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.578588457 +0000 UTC m=+0.158123569 container attach 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.591521585 +0000 UTC m=+0.166880348 container init 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.597108614 +0000 UTC m=+0.172467347 container start 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.602805216 +0000 UTC m=+0.178163979 container attach 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Feb 02 11:12:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 02 11:12:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 02 11:12:30 compute-0 ceph-mgr[74969]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1439877520; not ready for session (expect reconnect)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:30 compute-0 ceph-mgr[74969]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 11:12:30 compute-0 objective_mclaren[88817]: {
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:     "1": [
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:         {
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "devices": [
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "/dev/loop3"
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             ],
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "lv_name": "ceph_lv0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "lv_size": "21470642176",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "name": "ceph_lv0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "tags": {
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.cluster_name": "ceph",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.crush_device_class": "",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.encrypted": "0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.osd_id": "1",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.type": "block",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.vdo": "0",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:                 "ceph.with_tpm": "0"
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             },
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "type": "block",
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:             "vg_name": "ceph_vg0"
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:         }
Feb 02 11:12:30 compute-0 objective_mclaren[88817]:     ]
Feb 02 11:12:30 compute-0 objective_mclaren[88817]: }
Feb 02 11:12:30 compute-0 systemd[1]: libpod-0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.911240611 +0000 UTC m=+0.486599344 container died 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Feb 02 11:12:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/816624082' entity='client.admin' 
Feb 02 11:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc9819f7888bac1e9d5b2ef028dfd1f87185d077b7991fe7f7f67894ce89680a-merged.mount: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88784]: 2026-02-02 11:12:30.955330365 +0000 UTC m=+0.530689098 container remove 0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mclaren, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:12:30 compute-0 systemd[1]: libpod-86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 podman[88782]: 2026-02-02 11:12:30.961208292 +0000 UTC m=+0.540743384 container died 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:12:30 compute-0 systemd[1]: libpod-conmon-0af386d0adc36fcbc946e5ab2255e67d55f0ce7d02049d150dad29f81d47994e.scope: Deactivated successfully.
Feb 02 11:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d86b6dff29e461765036b158f44715e39b977701c8860c27bd24b465c668dc-merged.mount: Deactivated successfully.
Feb 02 11:12:30 compute-0 sudo[88639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:31 compute-0 podman[88782]: 2026-02-02 11:12:31.004210065 +0000 UTC m=+0.583745157 container remove 86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975 (image=quay.io/ceph/ceph:v19, name=blissful_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:31 compute-0 systemd[1]: libpod-conmon-86d3a53eebc23b85c414e66b168b53714f83e1d176a857a235327aea72ef2975.scope: Deactivated successfully.
Feb 02 11:12:31 compute-0 sudo[88769]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb 02 11:12:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb 02 11:12:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1439877520,v1:192.168.122.102:6801/1439877520] boot
Feb 02 11:12:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb 02 11:12:31 compute-0 sudo[88871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:31 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:31 compute-0 sudo[88871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:31 compute-0 sudo[88871]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 2.6 scrub starts
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 2.6 scrub ok
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 4.14 scrub starts
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 4.14 scrub ok
Feb 02 11:12:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3132421544' entity='client.admin' 
Feb 02 11:12:31 compute-0 ceph-mon[74676]: OSD bench result of 3518.015014 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 2.5 scrub starts
Feb 02 11:12:31 compute-0 ceph-mon[74676]: 2.5 scrub ok
Feb 02 11:12:31 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/816624082' entity='client.admin' 
Feb 02 11:12:31 compute-0 sudo[88896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:12:31 compute-0 sudo[88896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:31 compute-0 sudo[88944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtofdifekwmybkumissgrfarimdcpvxt ; /usr/bin/python3'
Feb 02 11:12:31 compute-0 sudo[88944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:31 compute-0 python3[88946]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.353366838 +0000 UTC m=+0.041513582 container create 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:12:31 compute-0 systemd[1]: Started libpod-conmon-0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4.scope.
Feb 02 11:12:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c03ced6b289fbecc7c2965116e328c796efa7f40565da0351b8f721d4ebfebf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c03ced6b289fbecc7c2965116e328c796efa7f40565da0351b8f721d4ebfebf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c03ced6b289fbecc7c2965116e328c796efa7f40565da0351b8f721d4ebfebf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.432599862 +0000 UTC m=+0.120746636 container init 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.335611153 +0000 UTC m=+0.023757917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.439599901 +0000 UTC m=+0.127746645 container start 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.443262545 +0000 UTC m=+0.131409319 container attach 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.556653471 +0000 UTC m=+0.044530698 container create 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:31 compute-0 systemd[1]: Started libpod-conmon-19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35.scope.
Feb 02 11:12:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.630543683 +0000 UTC m=+0.118420940 container init 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.535091368 +0000 UTC m=+0.022968615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.636885483 +0000 UTC m=+0.124762710 container start 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.641147745 +0000 UTC m=+0.129025022 container attach 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:12:31 compute-0 hungry_elbakyan[89039]: 167 167
Feb 02 11:12:31 compute-0 systemd[1]: libpod-19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35.scope: Deactivated successfully.
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.64378921 +0000 UTC m=+0.131666447 container died 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7372eec30186fb04d4ee851ebcdae6c3c00629b607d4b4ef40468c2231fe47a-merged.mount: Deactivated successfully.
Feb 02 11:12:31 compute-0 podman[89005]: 2026-02-02 11:12:31.677270022 +0000 UTC m=+0.165147249 container remove 19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:31 compute-0 systemd[1]: libpod-conmon-19db5735442a47e59fc35e08b9b48312341fdf1849e92dc4bd703149da2fcf35.scope: Deactivated successfully.
Feb 02 11:12:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Feb 02 11:12:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Feb 02 11:12:31 compute-0 podman[89063]: 2026-02-02 11:12:31.79883781 +0000 UTC m=+0.038017502 container create b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:12:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Feb 02 11:12:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1556289850' entity='client.admin' 
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.835711539 +0000 UTC m=+0.523858283 container died 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:31 compute-0 systemd[1]: Started libpod-conmon-b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b.scope.
Feb 02 11:12:31 compute-0 systemd[1]: libpod-0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4.scope: Deactivated successfully.
Feb 02 11:12:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c03ced6b289fbecc7c2965116e328c796efa7f40565da0351b8f721d4ebfebf-merged.mount: Deactivated successfully.
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044e54c731b5eb55e77d8c3bf3d12d867df3dba78718cb63dfaeb8f20898598c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 podman[88947]: 2026-02-02 11:12:31.871566429 +0000 UTC m=+0.559713173 container remove 0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4 (image=quay.io/ceph/ceph:v19, name=mystifying_kowalevski, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044e54c731b5eb55e77d8c3bf3d12d867df3dba78718cb63dfaeb8f20898598c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044e54c731b5eb55e77d8c3bf3d12d867df3dba78718cb63dfaeb8f20898598c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044e54c731b5eb55e77d8c3bf3d12d867df3dba78718cb63dfaeb8f20898598c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:31 compute-0 podman[89063]: 2026-02-02 11:12:31.78124364 +0000 UTC m=+0.020423362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:31 compute-0 sudo[88944]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:31 compute-0 systemd[1]: libpod-conmon-0413e6dcfee12ab652fc01ca5084da3faa7224e8efa0fe1cfc9c70a94ff904c4.scope: Deactivated successfully.
Feb 02 11:12:31 compute-0 podman[89063]: 2026-02-02 11:12:31.893343909 +0000 UTC m=+0.132523621 container init b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:12:31 compute-0 podman[89063]: 2026-02-02 11:12:31.897868148 +0000 UTC m=+0.137047840 container start b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:31 compute-0 podman[89063]: 2026-02-02 11:12:31.902235572 +0000 UTC m=+0.141415294 container attach b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb 02 11:12:32 compute-0 ceph-mon[74676]: 4.12 scrub starts
Feb 02 11:12:32 compute-0 ceph-mon[74676]: 4.12 scrub ok
Feb 02 11:12:32 compute-0 ceph-mon[74676]: pgmap v92: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Feb 02 11:12:32 compute-0 ceph-mon[74676]: osd.2 [v2:192.168.122.102:6800/1439877520,v1:192.168.122.102:6801/1439877520] boot
Feb 02 11:12:32 compute-0 ceph-mon[74676]: osdmap e33: 3 total, 3 up, 3 in
Feb 02 11:12:32 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1556289850' entity='client.admin' 
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb 02 11:12:32 compute-0 sudo[89143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aomjbmxphroanvdeunvobrsgopunqtlr ; /usr/bin/python3'
Feb 02 11:12:32 compute-0 sudo[89143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:32 compute-0 python3[89150]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:32 compute-0 sudo[89143]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:32 compute-0 lvm[89203]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:12:32 compute-0 lvm[89203]: VG ceph_vg0 finished
Feb 02 11:12:32 compute-0 mystifying_hawking[89083]: {}
Feb 02 11:12:32 compute-0 systemd[1]: libpod-b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b.scope: Deactivated successfully.
Feb 02 11:12:32 compute-0 systemd[1]: libpod-b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b.scope: Consumed 1.044s CPU time.
Feb 02 11:12:32 compute-0 podman[89063]: 2026-02-02 11:12:32.627891735 +0000 UTC m=+0.867071427 container died b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-044e54c731b5eb55e77d8c3bf3d12d867df3dba78718cb63dfaeb8f20898598c-merged.mount: Deactivated successfully.
Feb 02 11:12:32 compute-0 podman[89063]: 2026-02-02 11:12:32.675297233 +0000 UTC m=+0.914476925 container remove b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:32 compute-0 systemd[1]: libpod-conmon-b6e99dc79bc7e822b9565e4757c4c7688da81a5352e3419dcb5d421d256cf12b.scope: Deactivated successfully.
Feb 02 11:12:32 compute-0 sudo[89239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqmrzlhgstiunpfperguwgbolznxsomb ; /usr/bin/python3'
Feb 02 11:12:32 compute-0 sudo[89239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:32 compute-0 sudo[88896]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:32 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 038fe0f3-ad93-4eaf-bcaa-313e6f088a4e (Updating rgw.rgw deployment (+3 -> 3))
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.xfsamf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.xfsamf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.xfsamf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:32 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 02 11:12:32 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:32 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.xfsamf on compute-2
Feb 02 11:12:32 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.xfsamf on compute-2
Feb 02 11:12:32 compute-0 python3[89241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.dhyzzj/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:32 compute-0 podman[89242]: 2026-02-02 11:12:32.937628615 +0000 UTC m=+0.047480052 container create 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:12:32 compute-0 systemd[1]: Started libpod-conmon-2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348.scope.
Feb 02 11:12:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db02f95c3d075ad9485585f0743c26d2f5df52e5f4fb962051f6f13403eaa1c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db02f95c3d075ad9485585f0743c26d2f5df52e5f4fb962051f6f13403eaa1c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db02f95c3d075ad9485585f0743c26d2f5df52e5f4fb962051f6f13403eaa1c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:32.918516921 +0000 UTC m=+0.028368168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:33.014351738 +0000 UTC m=+0.124202985 container init 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:33.023197689 +0000 UTC m=+0.133048916 container start 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:33.026490623 +0000 UTC m=+0.136341850 container attach 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:12:33 compute-0 ceph-mon[74676]: 2.2 deep-scrub starts
Feb 02 11:12:33 compute-0 ceph-mon[74676]: 2.2 deep-scrub ok
Feb 02 11:12:33 compute-0 ceph-mon[74676]: 4.13 scrub starts
Feb 02 11:12:33 compute-0 ceph-mon[74676]: 4.13 scrub ok
Feb 02 11:12:33 compute-0 ceph-mon[74676]: osdmap e34: 3 total, 3 up, 3 in
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.xfsamf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.xfsamf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:33 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.dhyzzj/server_addr}] v 0)
Feb 02 11:12:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3007894501' entity='client.admin' 
Feb 02 11:12:33 compute-0 systemd[1]: libpod-2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348.scope: Deactivated successfully.
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:33.420559193 +0000 UTC m=+0.530410430 container died 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-db02f95c3d075ad9485585f0743c26d2f5df52e5f4fb962051f6f13403eaa1c6-merged.mount: Deactivated successfully.
Feb 02 11:12:33 compute-0 podman[89242]: 2026-02-02 11:12:33.455196749 +0000 UTC m=+0.565047976 container remove 2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348 (image=quay.io/ceph/ceph:v19, name=vigorous_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:33 compute-0 systemd[1]: libpod-conmon-2148dfc9714d7df84fdd355f7044ba8a2676855bbf372e1f08576367eb840348.scope: Deactivated successfully.
Feb 02 11:12:33 compute-0 sudo[89239]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:33 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 02 11:12:33 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 02 11:12:34 compute-0 sudo[89316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlliywhzbwapmjljatyillucsioeklw ; /usr/bin/python3'
Feb 02 11:12:34 compute-0 sudo[89316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:34 compute-0 python3[89318]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.iybsjv/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:34 compute-0 podman[89319]: 2026-02-02 11:12:34.261159086 +0000 UTC m=+0.064481666 container create 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:12:34 compute-0 podman[89319]: 2026-02-02 11:12:34.22511568 +0000 UTC m=+0.028438260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:34 compute-0 systemd[1]: Started libpod-conmon-8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542.scope.
Feb 02 11:12:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d09a93254171a001ba88ce8d1e66c4128ff6f50ed871ce745858e00cab6aa5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d09a93254171a001ba88ce8d1e66c4128ff6f50ed871ce745858e00cab6aa5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d09a93254171a001ba88ce8d1e66c4128ff6f50ed871ce745858e00cab6aa5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:34 compute-0 podman[89319]: 2026-02-02 11:12:34.42616928 +0000 UTC m=+0.229491880 container init 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:34 compute-0 podman[89319]: 2026-02-02 11:12:34.431076329 +0000 UTC m=+0.234398909 container start 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:12:34 compute-0 podman[89319]: 2026-02-02 11:12:34.462692829 +0000 UTC m=+0.266015439 container attach 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 3.15 scrub starts
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 3.15 scrub ok
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 2.1 scrub starts
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 4.16 scrub starts
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 2.1 scrub ok
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 4.16 scrub ok
Feb 02 11:12:34 compute-0 ceph-mon[74676]: Deploying daemon rgw.rgw.compute-2.xfsamf on compute-2
Feb 02 11:12:34 compute-0 ceph-mon[74676]: pgmap v95: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3007894501' entity='client.admin' 
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 5.13 scrub starts
Feb 02 11:12:34 compute-0 ceph-mon[74676]: 5.13 scrub ok
Feb 02 11:12:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:34 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Feb 02 11:12:34 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Feb 02 11:12:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.iybsjv/server_addr}] v 0)
Feb 02 11:12:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4050598161' entity='client.admin' 
Feb 02 11:12:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:35 compute-0 systemd[1]: libpod-8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542.scope: Deactivated successfully.
Feb 02 11:12:35 compute-0 podman[89319]: 2026-02-02 11:12:35.004333887 +0000 UTC m=+0.807656467 container died 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d09a93254171a001ba88ce8d1e66c4128ff6f50ed871ce745858e00cab6aa5-merged.mount: Deactivated successfully.
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.jqjceq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.jqjceq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.jqjceq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 02 11:12:35 compute-0 podman[89319]: 2026-02-02 11:12:35.206455227 +0000 UTC m=+1.009777807 container remove 8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542 (image=quay.io/ceph/ceph:v19, name=recursing_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:35 compute-0 systemd[1]: libpod-conmon-8f559e60da805dac6a8b3e2ac24f93a12c6b4df341fbfd0ebc7faabd4aa74542.scope: Deactivated successfully.
Feb 02 11:12:35 compute-0 sudo[89316]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.jqjceq on compute-1
Feb 02 11:12:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.jqjceq on compute-1
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 2.0 scrub starts
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 2.0 scrub ok
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 4.8 scrub starts
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 4.8 scrub ok
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 3.12 scrub starts
Feb 02 11:12:35 compute-0 ceph-mon[74676]: 3.12 scrub ok
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4050598161' entity='client.admin' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: pgmap v96: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.jqjceq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.jqjceq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:35 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:35 compute-0 ceph-mon[74676]: Deploying daemon rgw.rgw.compute-1.jqjceq on compute-1
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb 02 11:12:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb 02 11:12:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb 02 11:12:35 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 02 11:12:35 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 02 11:12:35 compute-0 sudo[89397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsgjloluoaongoohfjocxmwupxksmfdj ; /usr/bin/python3'
Feb 02 11:12:35 compute-0 sudo[89397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:35 compute-0 python3[89399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.zebspe/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.016712036 +0000 UTC m=+0.039713101 container create 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:12:36 compute-0 systemd[1]: Started libpod-conmon-1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97.scope.
Feb 02 11:12:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b203d17ffb1aeb9af16e517ea3487d6794831926be6ade66863065cbd472a89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b203d17ffb1aeb9af16e517ea3487d6794831926be6ade66863065cbd472a89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b203d17ffb1aeb9af16e517ea3487d6794831926be6ade66863065cbd472a89/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.079527923 +0000 UTC m=+0.102529018 container init 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.084244057 +0000 UTC m=+0.107245122 container start 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.087358585 +0000 UTC m=+0.110359670 container attach 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.000540446 +0000 UTC m=+0.023541531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.zebspe/server_addr}] v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3081333229' entity='client.admin' 
Feb 02 11:12:36 compute-0 systemd[1]: libpod-1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97.scope: Deactivated successfully.
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.460062157 +0000 UTC m=+0.483063222 container died 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b203d17ffb1aeb9af16e517ea3487d6794831926be6ade66863065cbd472a89-merged.mount: Deactivated successfully.
Feb 02 11:12:36 compute-0 podman[89400]: 2026-02-02 11:12:36.49004323 +0000 UTC m=+0.513044295 container remove 1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97 (image=quay.io/ceph/ceph:v19, name=beautiful_hypatia, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:12:36 compute-0 systemd[1]: libpod-conmon-1e05dd942f8c583b898ed61b6083e4dc04c64d86791fe378b287d99779723c97.scope: Deactivated successfully.
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 2.b deep-scrub starts
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 2.b deep-scrub ok
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 4.b scrub starts
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 4.b scrub ok
Feb 02 11:12:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3604332567' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb 02 11:12:36 compute-0 ceph-mon[74676]: osdmap e35: 3 total, 3 up, 3 in
Feb 02 11:12:36 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 3.14 deep-scrub starts
Feb 02 11:12:36 compute-0 ceph-mon[74676]: 3.14 deep-scrub ok
Feb 02 11:12:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3081333229' entity='client.admin' 
Feb 02 11:12:36 compute-0 sudo[89397]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:36 compute-0 sudo[89474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobuezvuziboowisxvbfcnbmfgsnzzbu ; /usr/bin/python3'
Feb 02 11:12:36 compute-0 sudo[89474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jqfvjy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jqfvjy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jqfvjy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:36 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jqfvjy on compute-0
Feb 02 11:12:36 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jqfvjy on compute-0
Feb 02 11:12:36 compute-0 sudo[89479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:36 compute-0 sudo[89479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:36 compute-0 sudo[89479]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:36 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 02 11:12:36 compute-0 python3[89478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:36 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 02 11:12:36 compute-0 sudo[89506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:36 compute-0 sudo[89506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:36 compute-0 podman[89524]: 2026-02-02 11:12:36.876984007 +0000 UTC m=+0.094792388 container create 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:12:36 compute-0 systemd[1]: Started libpod-conmon-93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423.scope.
Feb 02 11:12:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf31e61f346edc0e1f0341fedfba140a2e12d15e63b724356e9bb7d1ae25b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf31e61f346edc0e1f0341fedfba140a2e12d15e63b724356e9bb7d1ae25b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf31e61f346edc0e1f0341fedfba140a2e12d15e63b724356e9bb7d1ae25b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:36 compute-0 podman[89524]: 2026-02-02 11:12:36.932410624 +0000 UTC m=+0.150219015 container init 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:36 compute-0 podman[89524]: 2026-02-02 11:12:36.936883881 +0000 UTC m=+0.154692262 container start 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:12:36 compute-0 podman[89524]: 2026-02-02 11:12:36.940489164 +0000 UTC m=+0.158297565 container attach 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:12:36 compute-0 podman[89524]: 2026-02-02 11:12:36.858088229 +0000 UTC m=+0.075896610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v99: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.169791637 +0000 UTC m=+0.032822335 container create 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:37 compute-0 systemd[1]: Started libpod-conmon-6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7.scope.
Feb 02 11:12:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.22405244 +0000 UTC m=+0.087083138 container init 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.227628812 +0000 UTC m=+0.090659510 container start 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:12:37 compute-0 sweet_cartwright[89624]: 167 167
Feb 02 11:12:37 compute-0 conmon[89624]: conmon 6a1bd867f7e17918ac9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7.scope/container/memory.events
Feb 02 11:12:37 compute-0 systemd[1]: libpod-6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7.scope: Deactivated successfully.
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.232994224 +0000 UTC m=+0.096024912 container attach 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.233457538 +0000 UTC m=+0.096488236 container died 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.155921252 +0000 UTC m=+0.018951950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8edf2eeea8ba6becb39cba4b44008522d88f18a358f46eb7503c9eb635766e98-merged.mount: Deactivated successfully.
Feb 02 11:12:37 compute-0 podman[89607]: 2026-02-02 11:12:37.267612949 +0000 UTC m=+0.130643647 container remove 6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:12:37 compute-0 systemd[1]: libpod-conmon-6a1bd867f7e17918ac9c6e7f0d53ed8f2b250a7303673eeb2f7b695a6030abb7.scope: Deactivated successfully.
Feb 02 11:12:37 compute-0 systemd[1]: Reloading.
Feb 02 11:12:37 compute-0 systemd-rc-local-generator[89662]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:12:37 compute-0 systemd-sysv-generator[89669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:12:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2542422288' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb 02 11:12:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 2.3 scrub starts
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 2.3 scrub ok
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 4.17 deep-scrub starts
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 4.17 deep-scrub ok
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 02 11:12:37 compute-0 ceph-mon[74676]: osdmap e36: 3 total, 3 up, 3 in
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jqfvjy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jqfvjy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 3.11 scrub starts
Feb 02 11:12:37 compute-0 ceph-mon[74676]: 3.11 scrub ok
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:37 compute-0 ceph-mon[74676]: Deploying daemon rgw.rgw.compute-0.jqfvjy on compute-0
Feb 02 11:12:37 compute-0 ceph-mon[74676]: pgmap v99: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2542422288' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb 02 11:12:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb 02 11:12:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:37 compute-0 systemd[1]: Reloading.
Feb 02 11:12:37 compute-0 systemd-rc-local-generator[89707]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:12:37 compute-0 systemd-sysv-generator[89710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2542422288' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb 02 11:12:37 compute-0 wonderful_nobel[89546]: module 'dashboard' is already disabled
Feb 02 11:12:37 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.dhyzzj(active, since 2m), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:37 compute-0 podman[89524]: 2026-02-02 11:12:37.726476063 +0000 UTC m=+0.944284444 container died 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:37 compute-0 systemd[1]: libpod-93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423.scope: Deactivated successfully.
Feb 02 11:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ecf31e61f346edc0e1f0341fedfba140a2e12d15e63b724356e9bb7d1ae25b1-merged.mount: Deactivated successfully.
Feb 02 11:12:37 compute-0 podman[89524]: 2026-02-02 11:12:37.806605932 +0000 UTC m=+1.024414313 container remove 93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423 (image=quay.io/ceph/ceph:v19, name=wonderful_nobel, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:12:37 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.jqfvjy for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:12:37 compute-0 systemd[1]: libpod-conmon-93d3ede6525d9e09147064b8cc047516705eadf5c01e8f7c494868e4d1885423.scope: Deactivated successfully.
Feb 02 11:12:37 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb 02 11:12:37 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb 02 11:12:37 compute-0 sudo[89474]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:37 compute-0 sudo[89800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suoewozdiksaeozbttrubbswzticfesz ; /usr/bin/python3'
Feb 02 11:12:37 compute-0 sudo[89800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:37 compute-0 podman[89806]: 2026-02-02 11:12:37.976510106 +0000 UTC m=+0.030506769 container create 79f18d781bf392c718646b015d310a1c7e380e52bd6ea698b77977244310b995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-rgw-rgw-compute-0-jqfvjy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306f2c0ef37ca7b57615fcc969b538d71d2ca9a13fadd343a51a6d195878acbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306f2c0ef37ca7b57615fcc969b538d71d2ca9a13fadd343a51a6d195878acbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306f2c0ef37ca7b57615fcc969b538d71d2ca9a13fadd343a51a6d195878acbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306f2c0ef37ca7b57615fcc969b538d71d2ca9a13fadd343a51a6d195878acbf/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jqfvjy supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 podman[89806]: 2026-02-02 11:12:38.029898784 +0000 UTC m=+0.083895477 container init 79f18d781bf392c718646b015d310a1c7e380e52bd6ea698b77977244310b995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-rgw-rgw-compute-0-jqfvjy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:38 compute-0 podman[89806]: 2026-02-02 11:12:38.035561596 +0000 UTC m=+0.089558259 container start 79f18d781bf392c718646b015d310a1c7e380e52bd6ea698b77977244310b995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-rgw-rgw-compute-0-jqfvjy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:12:38 compute-0 bash[89806]: 79f18d781bf392c718646b015d310a1c7e380e52bd6ea698b77977244310b995
Feb 02 11:12:38 compute-0 podman[89806]: 2026-02-02 11:12:37.963619249 +0000 UTC m=+0.017615932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:12:38 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.jqfvjy for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:12:38 compute-0 python3[89807]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:38 compute-0 radosgw[89826]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:12:38 compute-0 radosgw[89826]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Feb 02 11:12:38 compute-0 radosgw[89826]: framework: beast
Feb 02 11:12:38 compute-0 radosgw[89826]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb 02 11:12:38 compute-0 radosgw[89826]: init_numa not setting numa affinity
Feb 02 11:12:38 compute-0 sudo[89506]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 podman[89835]: 2026-02-02 11:12:38.147894921 +0000 UTC m=+0.045573647 container create 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 038fe0f3-ad93-4eaf-bcaa-313e6f088a4e (Updating rgw.rgw deployment (+3 -> 3))
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 038fe0f3-ad93-4eaf-bcaa-313e6f088a4e (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:38 compute-0 systemd[1]: Started libpod-conmon-0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6.scope.
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 11:12:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97851dd2508c1add4fb451327f0e28aec50ebd86b1ece197ac94c291f054443/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97851dd2508c1add4fb451327f0e28aec50ebd86b1ece197ac94c291f054443/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97851dd2508c1add4fb451327f0e28aec50ebd86b1ece197ac94c291f054443/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:38 compute-0 podman[89835]: 2026-02-02 11:12:38.218617523 +0000 UTC m=+0.116296269 container init 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev b7851c58-7899-4de6-a9e2-07970dfcb9a7 (Updating node-exporter deployment (+3 -> 3))
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Feb 02 11:12:38 compute-0 podman[89835]: 2026-02-02 11:12:38.126151593 +0000 UTC m=+0.023830339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:38 compute-0 podman[89835]: 2026-02-02 11:12:38.227839575 +0000 UTC m=+0.125518301 container start 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:12:38 compute-0 podman[89835]: 2026-02-02 11:12:38.231205451 +0000 UTC m=+0.128884197 container attach 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:38 compute-0 sudo[90433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:38 compute-0 sudo[90433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:38 compute-0 sudo[90433]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:38 compute-0 sudo[90458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:38 compute-0 sudo[90458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:38 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 12 completed events
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 2.d scrub starts
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 4.9 scrub starts
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 2.d scrub ok
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 4.9 scrub ok
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3032189850' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4081522708' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:38 compute-0 ceph-mon[74676]: osdmap e37: 3 total, 3 up, 3 in
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 5.9 scrub starts
Feb 02 11:12:38 compute-0 ceph-mon[74676]: 5.9 scrub ok
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2542422288' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mgrmap e12: compute-0.dhyzzj(active, since 2m), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: Deploying daemon node-exporter.compute-0 on compute-0
Feb 02 11:12:38 compute-0 ceph-mon[74676]: from='mgr.14122 192.168.122.100:0/32123213' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb 02 11:12:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Feb 02 11:12:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4065607743' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb 02 11:12:38 compute-0 systemd[1]: Reloading.
Feb 02 11:12:38 compute-0 systemd-rc-local-generator[90574]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:12:38 compute-0 systemd-sysv-generator[90583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:12:38 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 02 11:12:38 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 02 11:12:38 compute-0 systemd[1]: Reloading.
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v102: 195 pgs: 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Feb 02 11:12:39 compute-0 systemd-sysv-generator[90623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:12:39 compute-0 systemd-rc-local-generator[90620]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:12:39 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:12:39 compute-0 bash[90679]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Feb 02 11:12:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 4.d scrub starts
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 4.d scrub ok
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 2.e scrub starts
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 2.e scrub ok
Feb 02 11:12:39 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 11:12:39 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 11:12:39 compute-0 ceph-mon[74676]: osdmap e38: 3 total, 3 up, 3 in
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 5.15 scrub starts
Feb 02 11:12:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4065607743' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb 02 11:12:39 compute-0 ceph-mon[74676]: 5.15 scrub ok
Feb 02 11:12:39 compute-0 ceph-mon[74676]: pgmap v102: 195 pgs: 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Feb 02 11:12:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4065607743' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  1: '-n'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  2: 'mgr.compute-0.dhyzzj'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  3: '-f'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  4: '--setuser'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  5: 'ceph'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  6: '--setgroup'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  7: 'ceph'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  8: '--default-log-to-file=false'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  9: '--default-log-to-journald=true'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr respawn  exe_path /proc/self/exe
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.dhyzzj(active, since 2m), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb 02 11:12:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 39 pg[10.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 02 11:12:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:39 compute-0 systemd[1]: libpod-0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 podman[89835]: 2026-02-02 11:12:39.641257773 +0000 UTC m=+1.538936509 container died 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:12:39 compute-0 sshd-session[76255]: Connection closed by 192.168.122.100 port 43908
Feb 02 11:12:39 compute-0 sshd-session[76282]: Connection closed by 192.168.122.100 port 43912
Feb 02 11:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a97851dd2508c1add4fb451327f0e28aec50ebd86b1ece197ac94c291f054443-merged.mount: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76311]: Connection closed by 192.168.122.100 port 43922
Feb 02 11:12:39 compute-0 sshd-session[76110]: Connection closed by 192.168.122.100 port 43874
Feb 02 11:12:39 compute-0 sshd-session[76023]: Connection closed by 192.168.122.100 port 43846
Feb 02 11:12:39 compute-0 sshd-session[76226]: Connection closed by 192.168.122.100 port 43900
Feb 02 11:12:39 compute-0 sshd-session[76168]: Connection closed by 192.168.122.100 port 43890
Feb 02 11:12:39 compute-0 sshd-session[76081]: Connection closed by 192.168.122.100 port 43860
Feb 02 11:12:39 compute-0 sshd-session[76052]: Connection closed by 192.168.122.100 port 43856
Feb 02 11:12:39 compute-0 sshd-session[76021]: Connection closed by 192.168.122.100 port 43834
Feb 02 11:12:39 compute-0 sshd-session[76197]: Connection closed by 192.168.122.100 port 43892
Feb 02 11:12:39 compute-0 sshd-session[76139]: Connection closed by 192.168.122.100 port 43886
Feb 02 11:12:39 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76279]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76107]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 sshd-session[76223]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76136]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 26 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 sshd-session[76308]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 30 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 32 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 27 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 26.
Feb 02 11:12:39 compute-0 sshd-session[76049]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 sshd-session[76165]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 32.
Feb 02 11:12:39 compute-0 sshd-session[76017]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 30.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 28 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 24 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 23 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 sshd-session[76252]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 sshd-session[76000]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76194]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 podman[89835]: 2026-02-02 11:12:39.693789068 +0000 UTC m=+1.591467794 container remove 0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6 (image=quay.io/ceph/ceph:v19, name=eloquent_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:12:39 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 sshd-session[76078]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 31 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 21 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 29 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 27.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Session 25 logged out. Waiting for processes to exit.
Feb 02 11:12:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setuser ceph since I am not root
Feb 02 11:12:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setgroup ceph since I am not root
Feb 02 11:12:39 compute-0 systemd[1]: libpod-conmon-0f1e421f0171a52aa1af3fae8345daadb4d68d8c63c33ee05e30a3449e96d8b6.scope: Deactivated successfully.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 28.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 24.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 23.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 21.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 31.
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 29.
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:12:39 compute-0 systemd-logind[793]: Removed session 25.
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:12:39 compute-0 sudo[89800]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:12:39 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb 02 11:12:39 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:39.839+0000 7fba31682140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:12:39 compute-0 sudo[90747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmcsqpmqiimwznqpgmiisjqfdnevyir ; /usr/bin/python3'
Feb 02 11:12:39 compute-0 sudo[90747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:39 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:12:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:39.942+0000 7fba31682140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:40 compute-0 python3[90749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:40 compute-0 podman[90750]: 2026-02-02 11:12:40.113150526 +0000 UTC m=+0.042659955 container create 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:40 compute-0 systemd[1]: Started libpod-conmon-9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533.scope.
Feb 02 11:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c3255964eae04801d6cfaae7a7dbbdd6ebfc7ec61a6c7b3cca15247e1480ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c3255964eae04801d6cfaae7a7dbbdd6ebfc7ec61a6c7b3cca15247e1480ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c3255964eae04801d6cfaae7a7dbbdd6ebfc7ec61a6c7b3cca15247e1480ae/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:40 compute-0 podman[90750]: 2026-02-02 11:12:40.1835802 +0000 UTC m=+0.113089659 container init 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:12:40 compute-0 podman[90750]: 2026-02-02 11:12:40.091642984 +0000 UTC m=+0.021152443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:40 compute-0 podman[90750]: 2026-02-02 11:12:40.190500867 +0000 UTC m=+0.120010296 container start 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:40 compute-0 podman[90750]: 2026-02-02 11:12:40.19553795 +0000 UTC m=+0.125047369 container attach 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:12:40 compute-0 bash[90679]: Getting image source signatures
Feb 02 11:12:40 compute-0 bash[90679]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Feb 02 11:12:40 compute-0 bash[90679]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Feb 02 11:12:40 compute-0 bash[90679]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Feb 02 11:12:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 2.f scrub starts
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 2.f scrub ok
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 4.0 scrub starts
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 4.0 scrub ok
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4065607743' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 5.14 scrub starts
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 5.14 scrub ok
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4081522708' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3032189850' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:40 compute-0 ceph-mon[74676]: mgrmap e13: compute-0.dhyzzj(active, since 2m), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:40 compute-0 ceph-mon[74676]: osdmap e39: 3 total, 3 up, 3 in
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:40 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 4.7 scrub starts
Feb 02 11:12:40 compute-0 ceph-mon[74676]: 4.7 scrub ok
Feb 02 11:12:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb 02 11:12:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb 02 11:12:40 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb 02 11:12:40 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 40 pg[10.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:40 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 02 11:12:40 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 02 11:12:40 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:12:40 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:12:40 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:12:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:40.933+0000 7fba31682140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:12:40 compute-0 bash[90679]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Feb 02 11:12:40 compute-0 bash[90679]: Writing manifest to image destination
Feb 02 11:12:41 compute-0 podman[90679]: 2026-02-02 11:12:41.048076171 +0000 UTC m=+1.660463994 container create c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23469e64d81a1aaed67c0843508277e963ab529774b210155668103668696b2f/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:41 compute-0 podman[90679]: 2026-02-02 11:12:41.104975979 +0000 UTC m=+1.717363822 container init c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:12:41 compute-0 podman[90679]: 2026-02-02 11:12:41.109754893 +0000 UTC m=+1.722142716 container start c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:12:41 compute-0 podman[90679]: 2026-02-02 11:12:41.034945614 +0000 UTC m=+1.647333467 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Feb 02 11:12:41 compute-0 bash[90679]: c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.116Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Feb 02 11:12:41 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.119Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=arp
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=bcache
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=bonding
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=btrfs
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=conntrack
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=cpu
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=diskstats
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=dmi
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.120Z caller=node_exporter.go:117 level=info collector=edac
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=entropy
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=filefd
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=filesystem
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=hwmon
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=infiniband
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=ipvs
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=loadavg
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=mdadm
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=meminfo
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=netclass
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=netdev
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=netstat
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=nfs
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=nfsd
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=nvme
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=os
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=pressure
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=rapl
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=schedstat
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=selinux
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=sockstat
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=softnet
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=stat
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=tapestats
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=textfile
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=thermal_zone
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=time
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=uname
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=vmstat
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=xfs
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.121Z caller=node_exporter.go:117 level=info collector=zfs
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.123Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[90869]: ts=2026-02-02T11:12:41.123Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
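With the collector list registered and TLS disabled, the exporter is now serving plaintext metrics on port 9100. A quick smoke test, assuming the endpoint is reachable from the host as the "Listening on address=[::]:9100" line suggests; node_scrape_collector_success is the exporter's standard per-collector health flag:

    import urllib.request

    # Scrape the freshly started exporter and count collectors reporting
    # success; a sketch, not part of the deployment itself.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        text = r.read().decode()

    ok = [line for line in text.splitlines()
          if line.startswith("node_scrape_collector_success")
          and line.endswith(" 1")]
    print(f"{len(ok)} collectors scraped successfully")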
Feb 02 11:12:41 compute-0 sudo[90458]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:41 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Feb 02 11:12:41 compute-0 systemd[1]: session-33.scope: Consumed 22.190s CPU time.
Feb 02 11:12:41 compute-0 systemd-logind[793]: Removed session 33.
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:12:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 2.10 scrub starts
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 2.10 scrub ok
Feb 02 11:12:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:41 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:41 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 11:12:41 compute-0 ceph-mon[74676]: osdmap e40: 3 total, 3 up, 3 in
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 3.10 deep-scrub starts
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 3.10 deep-scrub ok
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 4.c scrub starts
Feb 02 11:12:41 compute-0 ceph-mon[74676]: 4.c scrub ok
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:41.640+0000 7fba31682140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:12:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb 02 11:12:41 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb 02 11:12:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 02 11:12:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 02 11:12:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 02 11:12:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:41 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb 02 11:12:41 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:41.809+0000 7fba31682140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:12:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:41.884+0000 7fba31682140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:12:41 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:12:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:42.026+0000 7fba31682140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:12:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 2.4 scrub starts
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 2.4 scrub ok
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 3.f scrub starts
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 3.f scrub ok
Feb 02 11:12:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3032189850' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4081522708' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: osdmap e41: 3 total, 3 up, 3 in
Feb 02 11:12:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 4.2 scrub starts
Feb 02 11:12:42 compute-0 ceph-mon[74676]: 4.2 scrub ok
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb 02 11:12:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 02 11:12:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
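The mon_command payloads the monitor is dispatching here are plain JSON, so the same calls can be reproduced programmatically. A sketch using the python-rados binding's mon_command(), assuming admin credentials in the usual locations; the JSON body is copied verbatim from the audit lines above:

    import json
    import rados

    # Send the same "osd pool set" the RGW daemons are issuing, as JSON over
    # mon_command(); it returns (retcode, output buffer, error string).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool set",
                      "pool": "default.rgw.meta",
                      "var": "pg_autoscale_bias",
                      "val": "4"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, errs)
    cluster.shutdown()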
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:12:42 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 02 11:12:42 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 02 11:12:42 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.022+0000 7fba31682140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.252+0000 7fba31682140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.333+0000 7fba31682140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.398+0000 7fba31682140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.477+0000 7fba31682140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.551+0000 7fba31682140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb 02 11:12:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb 02 11:12:43 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 2.c scrub starts
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 2.c scrub ok
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 5.17 scrub starts
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 5.17 scrub ok
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4081522708' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3032189850' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:43 compute-0 ceph-mon[74676]: osdmap e42: 3 total, 3 up, 3 in
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:43 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 4.a scrub starts
Feb 02 11:12:43 compute-0 ceph-mon[74676]: 4.a scrub ok
Feb 02 11:12:43 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 02 11:12:43 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:43.912+0000 7fba31682140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:12:43 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:12:43 compute-0 radosgw[89826]: v1 topic migration: starting v1 topic migration..
Feb 02 11:12:43 compute-0 radosgw[89826]: LDAP not started since no server URIs were provided in the configuration.
Feb 02 11:12:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-rgw-rgw-compute-0-jqfvjy[89822]: 2026-02-02T11:12:43.913+0000 7fe13c52c980 -1 LDAP not started since no server URIs were provided in the configuration.
Feb 02 11:12:43 compute-0 radosgw[89826]: v1 topic migration: finished v1 topic migration
Feb 02 11:12:43 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Feb 02 11:12:43 compute-0 radosgw[89826]: framework: beast
Feb 02 11:12:43 compute-0 radosgw[89826]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb 02 11:12:43 compute-0 radosgw[89826]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb 02 11:12:43 compute-0 radosgw[89826]: starting handler: beast
Feb 02 11:12:43 compute-0 radosgw[89826]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:12:43 compute-0 radosgw[89826]: mgrc service_daemon_register rgw.14382 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jqfvjy,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=84bebf22-a60c-4c1e-abe8-47242680dd4d,zone_name=default,zonegroup_id=294c55f9-f7f9-445d-9954-ab8641436668,zonegroup_name=default}
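The registration metadata pins the RGW frontend to beast on 192.168.122.100:8082, so a liveness probe is a plain HTTP request. An unauthenticated GET of the service root normally returns an XML listing for the anonymous user, which is enough to confirm the frontend answers (sketch; the endpoint is taken from the frontend_config#0 field above):

    import urllib.request

    # Probe the beast frontend registered in the preceding log line.
    URL = "http://192.168.122.100:8082/"
    with urllib.request.urlopen(URL, timeout=5) as r:
        print(r.status, r.read(200).decode(errors="replace"))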
Feb 02 11:12:44 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:12:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:44.025+0000 7fba31682140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:12:44 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:12:44 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:12:44 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:12:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:44.517+0000 7fba31682140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:12:44 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:12:44 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb 02 11:12:44 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 2.11 scrub starts
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 2.11 scrub ok
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 5.16 deep-scrub starts
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 5.16 deep-scrub ok
Feb 02 11:12:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4235536652' entity='client.rgw.rgw.compute-0.jqfvjy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:44 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-2.xfsamf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:44 compute-0 ceph-mon[74676]: from='client.? ' entity='client.rgw.rgw.compute-1.jqjceq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 11:12:44 compute-0 ceph-mon[74676]: osdmap e43: 3 total, 3 up, 3 in
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 2.14 scrub starts
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 2.14 scrub ok
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 4.1 scrub starts
Feb 02 11:12:44 compute-0 ceph-mon[74676]: 4.1 scrub ok
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.137+0000 7fba31682140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.222+0000 7fba31682140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.318+0000 7fba31682140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:12:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.504+0000 7fba31682140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.587+0000 7fba31682140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 02 11:12:45 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 3.13 scrub starts
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 3.13 scrub ok
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 2.12 scrub starts
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 2.12 scrub ok
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 4.6 scrub starts
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 4.6 scrub ok
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 3.d scrub starts
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 3.d scrub ok
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 2.17 scrub starts
Feb 02 11:12:45 compute-0 ceph-mon[74676]: 2.17 scrub ok
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:12:45 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:12:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:45.787+0000 7fba31682140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:12:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:46.061+0000 7fba31682140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:46.388+0000 7fba31682140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:12:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:46.477+0000 7fba31682140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
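Each "missing NOTIFY_TYPES member" line above is the module loader noting that a bundled module never declares which cluster notifications its notify() hook consumes; the module still loads. The shape of the declaration, sketched against the mgr module convention (only meaningful inside ceph-mgr's embedded interpreter; the NotifyType values chosen here are assumptions, not a fix for the bundled modules):

    # Sketch of the class attribute the loader is warning about.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring this silences "Module ... has missing NOTIFY_TYPES member".
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            # Called only for the notification types declared above.
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")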
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x55eb859b1860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.dhyzzj(active, starting, since 0.0303779s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr handle_mgr_map Activating!
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr handle_mgr_map I am now activating
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: balancer
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [balancer INFO root] Starting
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Manager daemon compute-0.dhyzzj is now available
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:12:46
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: cephadm
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: crash
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: dashboard
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO access_control] Loading user roles DB version=2
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: devicehealth
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Starting
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO sso] Loading SSO DB version=1
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: iostat
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: nfs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: orchestrator
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: pg_autoscaler
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: progress
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [progress INFO root] Loading...
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fb9b09d7f40>, <progress.module.GhostEvent object at 0x7fb9b09d7f70>, <progress.module.GhostEvent object at 0x7fb9b09d7fa0>, <progress.module.GhostEvent object at 0x7fb9b09d7fd0>, <progress.module.GhostEvent object at 0x7fb9ae97d040>, <progress.module.GhostEvent object at 0x7fb9ae97d070>, <progress.module.GhostEvent object at 0x7fb9ae97d0a0>, <progress.module.GhostEvent object at 0x7fb9ae97d0d0>, <progress.module.GhostEvent object at 0x7fb9ae97d100>, <progress.module.GhostEvent object at 0x7fb9ae97d130>, <progress.module.GhostEvent object at 0x7fb9ae97d160>, <progress.module.GhostEvent object at 0x7fb9ae97d190>] historic events
Feb 02 11:12:46 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] recovery thread starting
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] starting setup
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: rbd_support
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:12:46 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe started
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: restful
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [restful INFO root] server_addr: :: server_port: 8003
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [restful WARNING root] server not running: no certificate configured
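The restful module parks itself until a TLS certificate is configured; the module documents a helper for minting a self-signed one. A sketch of the usual remedy, run with admin credentials (command name per the restful module's documentation, not observed in this log):

    import subprocess

    # Generate and store a self-signed cert for the restful module; once a
    # cert exists the server can come up on the port the log shows (8003).
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)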
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: status
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: telemetry
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] PerfHandler: starting
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: volumes
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: images, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TaskHandler: starting
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"} v 0)
Feb 02 11:12:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [rbd_support INFO root] setup complete
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 4.4 scrub starts
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 4.4 scrub ok
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:12:46 compute-0 ceph-mon[74676]: osdmap e44: 3 total, 3 up, 3 in
Feb 02 11:12:46 compute-0 ceph-mon[74676]: mgrmap e14: compute-0.dhyzzj(active, starting, since 0.0303779s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 5.8 scrub starts
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 5.8 scrub ok
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Manager daemon compute-0.dhyzzj is now available
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 2.18 scrub starts
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mon[74676]: 2.18 scrub ok
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe restarted
Feb 02 11:12:46 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe started
Feb 02 11:12:46 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb 02 11:12:46 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Feb 02 11:12:47 compute-0 sshd-session[91028]: Accepted publickey for ceph-admin from 192.168.122.100 port 38492 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:12:47 compute-0 systemd-logind[793]: New session 34 of user ceph-admin.
Feb 02 11:12:47 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Feb 02 11:12:47 compute-0 sshd-session[91028]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:12:47 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.module] Engine started.
Feb 02 11:12:47 compute-0 sudo[91044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:47 compute-0 sudo[91044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:47 compute-0 sudo[91044]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:47 compute-0 sudo[91069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:12:47 compute-0 sudo[91069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:47 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.dhyzzj(active, since 1.05704s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Feb 02 11:12:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:47 compute-0 relaxed_herschel[90766]: Option GRAFANA_API_USERNAME updated
Feb 02 11:12:47 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 02 11:12:47 compute-0 systemd[1]: libpod-9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533.scope: Deactivated successfully.
Feb 02 11:12:47 compute-0 podman[90750]: 2026-02-02 11:12:47.59042917 +0000 UTC m=+7.519938599 container died 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:12:47 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 02 11:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c3255964eae04801d6cfaae7a7dbbdd6ebfc7ec61a6c7b3cca15247e1480ae-merged.mount: Deactivated successfully.
Feb 02 11:12:47 compute-0 podman[90750]: 2026-02-02 11:12:47.630777946 +0000 UTC m=+7.560287375 container remove 9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533 (image=quay.io/ceph/ceph:v19, name=relaxed_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:47 compute-0 systemd[1]: libpod-conmon-9dfa20960b8fc2144a593f40a2b1670706818212d05c77d0e9249e0b107cd533.scope: Deactivated successfully.
Feb 02 11:12:47 compute-0 sudo[90747]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:47 compute-0 sudo[91199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nefxcsqxrmnfnnamplifgczofqstzlqu ; /usr/bin/python3'
Feb 02 11:12:47 compute-0 sudo[91199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:47 compute-0 podman[91201]: 2026-02-02 11:12:47.828908367 +0000 UTC m=+0.053393492 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:12:47 compute-0 python3[91203]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Feb 02 11:12:47 compute-0 podman[91201]: 2026-02-02 11:12:47.948068223 +0000 UTC m=+0.172553338 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:47 compute-0 podman[91223]: 2026-02-02 11:12:47.97339754 +0000 UTC m=+0.043557307 container create 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 systemd[1]: Started libpod-conmon-7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9.scope.
Feb 02 11:12:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6cb2c850306cb7207b39b3026230ebd8375f98d081bb6dc5be4b4359ac5f8c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6cb2c850306cb7207b39b3026230ebd8375f98d081bb6dc5be4b4359ac5f8c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6cb2c850306cb7207b39b3026230ebd8375f98d081bb6dc5be4b4359ac5f8c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:47.954552384 +0000 UTC m=+0.024712171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:48.069461542 +0000 UTC m=+0.139621329 container init 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:48.075012887 +0000 UTC m=+0.145172654 container start 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:48.078387961 +0000 UTC m=+0.148547728 container attach 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 podman[91376]: 2026-02-02 11:12:48.432196688 +0000 UTC m=+0.054923405 container exec c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:12:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:48 compute-0 podman[91376]: 2026-02-02 11:12:48.441990011 +0000 UTC m=+0.064716728 container exec_died c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 peaceful_knuth[91268]: Option GRAFANA_API_PASSWORD updated
Feb 02 11:12:48 compute-0 systemd[1]: libpod-7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9.scope: Deactivated successfully.
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:48.500189926 +0000 UTC m=+0.570349693 container died 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:48 compute-0 sudo[91069]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa6cb2c850306cb7207b39b3026230ebd8375f98d081bb6dc5be4b4359ac5f8c-merged.mount: Deactivated successfully.
Feb 02 11:12:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 4.3 scrub starts
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 4.3 scrub ok
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mgrmap e15: compute-0.dhyzzj(active, since 1.05704s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:48 compute-0 ceph-mon[74676]: pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 3.b scrub starts
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 3.b scrub ok
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 2.16 scrub starts
Feb 02 11:12:48 compute-0 ceph-mon[74676]: 2.16 scrub ok
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 podman[91223]: 2026-02-02 11:12:48.546764146 +0000 UTC m=+0.616923913 container remove 7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9 (image=quay.io/ceph/ceph:v19, name=peaceful_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:48 compute-0 systemd[1]: libpod-conmon-7f2fb73711de19d59b004cb84997ccefa0b2a12cf61bb683f647b1237a6e64a9.scope: Deactivated successfully.
Feb 02 11:12:48 compute-0 sudo[91199]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 sudo[91425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:48 compute-0 sudo[91425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:48 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Feb 02 11:12:48 compute-0 sudo[91425]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:48 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Feb 02 11:12:48 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:12:48 compute-0 sudo[91461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:12:48 compute-0 sudo[91461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:48 compute-0 sudo[91509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vusbypqmbrgynmxswhfbteywggvdwrrh ; /usr/bin/python3'
Feb 02 11:12:48 compute-0 sudo[91509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Feb 02 11:12:48 compute-0 python3[91511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:48 compute-0 podman[91527]: 2026-02-02 11:12:48.931053803 +0000 UTC m=+0.045801859 container create d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:48 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:12:48] ENGINE Bus STARTING
Feb 02 11:12:48 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:12:48] ENGINE Bus STARTING
Feb 02 11:12:48 compute-0 systemd[1]: Started libpod-conmon-d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d.scope.
Feb 02 11:12:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ec50e678347d79d54f929c471d19a59295dc5065b3048d37a65d4047a26d97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ec50e678347d79d54f929c471d19a59295dc5065b3048d37a65d4047a26d97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ec50e678347d79d54f929c471d19a59295dc5065b3048d37a65d4047a26d97/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:48.907911327 +0000 UTC m=+0.022659403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:49.009879734 +0000 UTC m=+0.124627810 container init d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:49.014990996 +0000 UTC m=+0.129739052 container start d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:49.018544616 +0000 UTC m=+0.133292702 container attach d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:12:49] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:12:49] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:49 compute-0 sudo[91461]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 sudo[91583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:12:49 compute-0 sudo[91583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91583]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 sudo[91631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Feb 02 11:12:49 compute-0 sudo[91631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:12:49] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:12:49] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:12:49] ENGINE Bus STARTED
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:12:49] ENGINE Bus STARTED
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:12:49] ENGINE Client ('192.168.122.100', 34732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:12:49] ENGINE Client ('192.168.122.100', 34732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 stupefied_gates[91553]: Option ALERTMANAGER_API_HOST updated
Feb 02 11:12:49 compute-0 systemd[1]: libpod-d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d.scope: Deactivated successfully.
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:49.434183798 +0000 UTC m=+0.548931864 container died d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-44ec50e678347d79d54f929c471d19a59295dc5065b3048d37a65d4047a26d97-merged.mount: Deactivated successfully.
Feb 02 11:12:49 compute-0 podman[91527]: 2026-02-02 11:12:49.465793121 +0000 UTC m=+0.580541177 container remove d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d (image=quay.io/ceph/ceph:v19, name=stupefied_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:49 compute-0 systemd[1]: libpod-conmon-d904d8ce08b76390fcbd78aa152322e9d187b1d182ff6ff2b8a5a772ed40449d.scope: Deactivated successfully.
Feb 02 11:12:49 compute-0 sudo[91509]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 sudo[91631]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.dhyzzj(active, since 3s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb 02 11:12:49 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.19( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.858200073s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.746391296s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.19( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.858159065s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.746391296s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222443581s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.110916138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222431183s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.110916138s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.856844902s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.745422363s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.856831551s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.745422363s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222006798s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.110923767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221993446s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.110923767s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.857298851s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.746398926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.857280731s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.746398926s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.210925102s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.100051880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.210860252s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.100051880s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.1f( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221974373s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.111534119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222039223s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.111602783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221942902s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.111534119s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.13( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855718613s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.745361328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.13( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855703354s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.745361328s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221853256s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.111610413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221842766s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.111610413s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855553627s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.745376587s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222616196s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112464905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222605705s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112464905s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221701622s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.111602783s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855525970s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.745376587s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.10( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855043411s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744964600s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.10( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.855024338s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744964600s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854936600s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744956970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854925156s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744956970s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854592323s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744682312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221579552s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.111694336s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854573250s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744682312s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222173691s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112304688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221566200s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.111694336s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854315758s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744529724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854299545s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744529724s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221940041s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112236023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221879005s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112236023s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221884727s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112358093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854465485s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744941711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221872330s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112358093s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.854440689s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744941711s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.853991508s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744567871s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.853974342s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744567871s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221733093s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112388611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221714973s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112388611s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.222158432s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112304688s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221294403s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112457275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221261978s) [2] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112457275s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852962494s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744194031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852943420s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744194031s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221140862s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112480164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852824211s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744155884s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.221129417s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112480164s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852793694s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744155884s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.853501320s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.744941711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852514267s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743995667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.853488922s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.744941711s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.852498055s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743995667s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220822334s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112380981s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220803261s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112380981s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220926285s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112571716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220913887s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112571716s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851728439s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743545532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851406097s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743225098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851391792s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743225098s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851714134s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743545532s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220938683s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112861633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220735550s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112678528s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220917702s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112861633s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220707893s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112678528s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.846725464s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.738708496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.846693993s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.738708496s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851108551s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743186951s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851093292s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743186951s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.846467018s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.738616943s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220582008s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112731934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220542908s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112731934s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220621109s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112861633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.220604897s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112861633s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851175308s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743545532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1f( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.851135254s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743545532s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.1b( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.846442223s) [2] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.738616943s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.849515915s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 98.743171692s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[2.1e( empty local-lis/les=25/26 n=0 ec=25/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=10.849430084s) [0] r=-1 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.743171692s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.1c( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.218344688s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 97.112632751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=9.217929840s) [0] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.112632751s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=0/0 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[5.10( empty local-lis/les=0/0 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.19( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.1a( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.5( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.c( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.a( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 4.f scrub starts
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 4.f scrub ok
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 5.a scrub starts
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 5.a scrub ok
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 2.13 deep-scrub starts
Feb 02 11:12:49 compute-0 ceph-mon[74676]: 2.13 deep-scrub ok
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: mgrmap e16: compute-0.dhyzzj(active, since 3s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:12:49 compute-0 ceph-mon[74676]: osdmap e45: 3 total, 3 up, 3 in
Feb 02 11:12:49 compute-0 sudo[91687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:12:49 compute-0 sudo[91687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 sudo[91735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjviffjsptpbfobejmknuwgiwdwwtlqs ; /usr/bin/python3'
Feb 02 11:12:49 compute-0 sudo[91735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:49 compute-0 sudo[91736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:12:49 compute-0 sudo[91736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91736]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 02 11:12:49 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 02 11:12:49 compute-0 sudo[91763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:49 compute-0 sudo[91763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91763]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 sudo[91788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:49 compute-0 sudo[91788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91788]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 python3[91743]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
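
Each dashboard setting is applied by running the ceph CLI in a throwaway container: podman bind-mounts /etc/ceph into quay.io/ceph/ceph:v19, runs one command, and --rm tears the container down again (the create/init/start/attach/died/remove sequence below). A hedged Python wrapper for the same invocation; image, fsid, and mounts are copied from the log line above, while the function itself is a hypothetical convenience, not part of the deployment:

    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    IMAGE = "quay.io/ceph/ceph:v19"

    def ceph_in_container(*args: str) -> str:
        """Run one ceph CLI subcommand in a disposable podman container."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Usage matching the logged task:
    # ceph_in_container("dashboard", "set-prometheus-api-host",
    #                   "http://192.168.122.100:9092")
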
Feb 02 11:12:49 compute-0 sudo[91813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:49 compute-0 sudo[91813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91813]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 podman[91814]: 2026-02-02 11:12:49.816601484 +0000 UTC m=+0.040584274 container create d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:12:49 compute-0 systemd[1]: Started libpod-conmon-d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a.scope.
Feb 02 11:12:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20c7b00b91c8f59e7c1f07e7a6487654a051514f4795a3da8093db277e5d3b8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20c7b00b91c8f59e7c1f07e7a6487654a051514f4795a3da8093db277e5d3b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20c7b00b91c8f59e7c1f07e7a6487654a051514f4795a3da8093db277e5d3b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:49 compute-0 podman[91814]: 2026-02-02 11:12:49.799200908 +0000 UTC m=+0.023183718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:49 compute-0 podman[91814]: 2026-02-02 11:12:49.896597857 +0000 UTC m=+0.120580667 container init d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:12:49 compute-0 podman[91814]: 2026-02-02 11:12:49.904857697 +0000 UTC m=+0.128840487 container start d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:12:49 compute-0 sudo[91880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:49 compute-0 podman[91814]: 2026-02-02 11:12:49.910516845 +0000 UTC m=+0.134499635 container attach d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:49 compute-0 sudo[91880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91880]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:49 compute-0 sudo[91906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:12:49 compute-0 sudo[91906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:49 compute-0 sudo[91906]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:50 compute-0 sudo[91931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 11:12:50 compute-0 sudo[91931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[91931]: pam_unix(sudo:session): session closed for user root
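
The sudo sequence above (mkdir -p, touch ceph.conf.new, chown to ceph-admin so the unprivileged SSH user can write the payload, chown back to 0:0, chmod 644, then mv into /etc/ceph) is a staged atomic replace: the final same-filesystem rename is what guarantees readers see either the old or the new ceph.conf, never a partial one. A single-process sketch of the same idea, assuming we can write the staging file directly rather than over SSH:

    import os
    import tempfile

    def atomic_write(path: str, data: bytes, mode: int = 0o644) -> None:
        directory = os.path.dirname(path) or "."
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.chmod(tmp, mode)   # 644 for ceph.conf; the keyring later gets 600
            os.replace(tmp, path) # atomic rename: readers never see a partial file
        except BaseException:
            try:
                os.unlink(tmp)
            except FileNotFoundError:
                pass
            raise
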
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:50 compute-0 sudo[91975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:50 compute-0 sudo[91975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[91975]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:50 compute-0 sudo[92000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92000]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:50 compute-0 sudo[92025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92025]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:50 compute-0 sudo[92050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92050]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14451 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Feb 02 11:12:50 compute-0 sudo[92075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:50 compute-0 sudo[92075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92075]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:50 compute-0 sharp_kapitsa[91877]: Option PROMETHEUS_API_HOST updated
Feb 02 11:12:50 compute-0 systemd[1]: libpod-d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a.scope: Deactivated successfully.
Feb 02 11:12:50 compute-0 podman[91814]: 2026-02-02 11:12:50.287817838 +0000 UTC m=+0.511800638 container died d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e20c7b00b91c8f59e7c1f07e7a6487654a051514f4795a3da8093db277e5d3b8-merged.mount: Deactivated successfully.
Feb 02 11:12:50 compute-0 podman[91814]: 2026-02-02 11:12:50.325633073 +0000 UTC m=+0.549615863 container remove d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:50 compute-0 systemd[1]: libpod-conmon-d30dce1baa6eef0a9a33a0d7e461aafed4d27fad75abae1c9e7d7e66107ca84a.scope: Deactivated successfully.
Feb 02 11:12:50 compute-0 sudo[91735]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:50 compute-0 sudo[92137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92137]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 sudo[92162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:12:50 compute-0 sudo[92162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92162]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:50 compute-0 sudo[92187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhawtraabjnmlsglluecbkncehwlpnpq ; /usr/bin/python3'
Feb 02 11:12:50 compute-0 sudo[92233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:50 compute-0 sudo[92187]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:50 compute-0 sudo[92238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:12:50 compute-0 sudo[92238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92238]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb 02 11:12:50 compute-0 sudo[92263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:12:50 compute-0 sudo[92263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb 02 11:12:50 compute-0 sudo[92263]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.19( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.1a( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=27/15 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=29/18 lis/c=33/33 les/c/f=34/34/0 sis=45) [1] r=0 lpr=45 pi=[33,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=29/19 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:12:50 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=27/16 lis/c=27/27 les/c/f=28/28/0 sis=45) [1] r=0 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
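
The burst of osd.1 lines above is peering completing at osdmap epoch 46 after the pgp_num change: each placement group in pools 3-6 where osd.1 is primary reports AllReplicasActivated and returns to active. A hypothetical helper to summarise such a burst, reading journal text on stdin and counting activations per pool id:

    import re
    import sys
    from collections import Counter

    # "pg[5.18(" -> pool id 5
    PG_RE = re.compile(r"pg\[(\d+)\.[0-9a-f]+\(")

    counts = Counter()
    for line in sys.stdin:
        if "AllReplicasActivated Activating complete" in line:
            m = PG_RE.search(line)
            if m:
                counts[int(m.group(1))] += 1

    for pool, n in sorted(counts.items()):
        print(f"pool {pool}: {n} PGs activated")
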
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 4.1c deep-scrub starts
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 4.1c deep-scrub ok
Feb 02 11:12:50 compute-0 ceph-mon[74676]: [02/Feb/2026:11:12:48] ENGINE Bus STARTING
Feb 02 11:12:50 compute-0 ceph-mon[74676]: [02/Feb/2026:11:12:49] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:12:50 compute-0 ceph-mon[74676]: [02/Feb/2026:11:12:49] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:12:50 compute-0 ceph-mon[74676]: [02/Feb/2026:11:12:49] ENGINE Bus STARTED
Feb 02 11:12:50 compute-0 ceph-mon[74676]: [02/Feb/2026:11:12:49] ENGINE Client ('192.168.122.100', 34732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
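
The ENGINE lines are the restarted dashboard's CherryPy server relayed through the cluster log; the "lost" entry is a client that opened a TCP connection to the TLS listener and closed it before completing the handshake. That is the signature of a plain port-liveness probe and is harmless. A sketch of such a probe, assuming it targets the HTTPS listener on 7150 shown above:

    import socket

    # Connect and immediately close without ever speaking TLS; on the server
    # side this surfaces as the handshake EOF logged above.
    with socket.create_connection(("192.168.122.100", 7150), timeout=2):
        pass
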
Feb 02 11:12:50 compute-0 ceph-mon[74676]: from='client.14445 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:50 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:12:50 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:12:50 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 5.0 scrub starts
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 5.0 scrub ok
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 2.1a scrub starts
Feb 02 11:12:50 compute-0 ceph-mon[74676]: 2.1a scrub ok
Feb 02 11:12:50 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:50 compute-0 ceph-mon[74676]: osdmap e46: 3 total, 3 up, 3 in
Feb 02 11:12:50 compute-0 python3[92237]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:50 compute-0 sudo[92288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:12:50 compute-0 sudo[92288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92288]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 sudo[92314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:50 compute-0 sudo[92314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 podman[92312]: 2026-02-02 11:12:50.65689535 +0000 UTC m=+0.041099268 container create 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:50 compute-0 sudo[92314]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 systemd[1]: Started libpod-conmon-2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63.scope.
Feb 02 11:12:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38fd77579c7d2ac5643e8509fb4c06388c49d35020ef583188c98bedf08f10f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38fd77579c7d2ac5643e8509fb4c06388c49d35020ef583188c98bedf08f10f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38fd77579c7d2ac5643e8509fb4c06388c49d35020ef583188c98bedf08f10f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:50 compute-0 sudo[92351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:12:50 compute-0 sudo[92351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92351]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 podman[92312]: 2026-02-02 11:12:50.721575156 +0000 UTC m=+0.105779074 container init 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:12:50 compute-0 podman[92312]: 2026-02-02 11:12:50.726944766 +0000 UTC m=+0.111148684 container start 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:12:50 compute-0 podman[92312]: 2026-02-02 11:12:50.731655417 +0000 UTC m=+0.115859335 container attach 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:12:50 compute-0 podman[92312]: 2026-02-02 11:12:50.637375256 +0000 UTC m=+0.021579204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 sudo[92405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:12:50 compute-0 sudo[92405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92405]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 sudo[92433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:12:50 compute-0 sudo[92433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92433]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Feb 02 11:12:50 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Feb 02 11:12:50 compute-0 sudo[92474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 sudo[92474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92474]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:50 compute-0 sudo[92499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:50 compute-0 sudo[92499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:50 compute-0 sudo[92499]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 sudo[92524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:12:51 compute-0 sudo[92524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92524]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 sudo[92549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:12:51 compute-0 sudo[92549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92549]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14457 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 funny_jemison[92376]: Option GRAFANA_API_URL updated
Feb 02 11:12:51 compute-0 systemd[1]: libpod-2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63.scope: Deactivated successfully.
Feb 02 11:12:51 compute-0 podman[92312]: 2026-02-02 11:12:51.121290123 +0000 UTC m=+0.505494041 container died 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:51 compute-0 sudo[92574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:12:51 compute-0 sudo[92574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92574]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38fd77579c7d2ac5643e8509fb4c06388c49d35020ef583188c98bedf08f10f-merged.mount: Deactivated successfully.
Feb 02 11:12:51 compute-0 podman[92312]: 2026-02-02 11:12:51.15449111 +0000 UTC m=+0.538695028 container remove 2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63 (image=quay.io/ceph/ceph:v19, name=funny_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:12:51 compute-0 systemd[1]: libpod-conmon-2071a18f460c4374dc6c4e0fbc3ba3a4109c7a93f8b340b27070c4415a9f6b63.scope: Deactivated successfully.
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:12:51 compute-0 sudo[92233]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 sudo[92609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:12:51 compute-0 sudo[92609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92609]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 sudo[92662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:12:51 compute-0 sudo[92662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92662]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 sudo[92710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddarterqwnhbxkgnxrldxfybbbdobzob ; /usr/bin/python3'
Feb 02 11:12:51 compute-0 sudo[92710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:12:51 compute-0 sudo[92711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:12:51 compute-0 sudo[92711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92711]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 sudo[92738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:51 compute-0 sudo[92738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:12:51 compute-0 sudo[92738]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 5292854d-4718-421f-8870-cc5a08882727 (Updating node-exporter deployment (+2 -> 3))
Feb 02 11:12:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Feb 02 11:12:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Feb 02 11:12:51 compute-0 python3[92717]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
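
Disabling the dashboard module goes through the same one-shot container, but the command itself is an ordinary mon command (see the handle_command dispatch below). The same call via the librados Python binding instead of a container; a sketch assuming python3-rados is installed and the admin keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    try:
        # Same JSON the mon logs in its audit channel for this command.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "mgr module disable", "module": "dashboard"}),
            b"")
        print(ret, outs)
    finally:
        cluster.shutdown()
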
Feb 02 11:12:51 compute-0 podman[92763]: 2026-02-02 11:12:51.47617005 +0000 UTC m=+0.037438677 container create 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:12:51 compute-0 systemd[1]: Started libpod-conmon-03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd.scope.
Feb 02 11:12:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1f4de96b05bb48d62cdf99c55e6c6c482750a324712e02c89e6d6e32471deb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1f4de96b05bb48d62cdf99c55e6c6c482750a324712e02c89e6d6e32471deb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1f4de96b05bb48d62cdf99c55e6c6c482750a324712e02c89e6d6e32471deb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:51 compute-0 podman[92763]: 2026-02-02 11:12:51.539165628 +0000 UTC m=+0.100434275 container init 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:51 compute-0 podman[92763]: 2026-02-02 11:12:51.543949902 +0000 UTC m=+0.105218529 container start 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:12:51 compute-0 podman[92763]: 2026-02-02 11:12:51.547138481 +0000 UTC m=+0.108407098 container attach 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:12:51 compute-0 podman[92763]: 2026-02-02 11:12:51.461669935 +0000 UTC m=+0.022938582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:51 compute-0 ceph-mon[74676]: 6.18 scrub starts
Feb 02 11:12:51 compute-0 ceph-mon[74676]: 6.18 scrub ok
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='client.14451 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:51 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:12:51 compute-0 ceph-mon[74676]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:51 compute-0 ceph-mon[74676]: 6.1f scrub starts
Feb 02 11:12:51 compute-0 ceph-mon[74676]: 6.1f scrub ok
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mgrmap e17: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-mon[74676]: from='mgr.24178 192.168.122.100:0/3808112507' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:12:51 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 02 11:12:51 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 02 11:12:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Feb 02 11:12:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1373022539' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:12:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1373022539' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb 02 11:12:52 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  1: '-n'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  2: 'mgr.compute-0.dhyzzj'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  3: '-f'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  4: '--setuser'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  5: 'ceph'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  6: '--setgroup'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  7: 'ceph'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  8: '--default-log-to-file=false'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  9: '--default-log-to-journald=true'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr respawn  exe_path /proc/self/exe
Feb 02 11:12:52 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.dhyzzj(active, since 6s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 5.e scrub starts
Feb 02 11:12:52 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:52 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 7.1c deep-scrub starts
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 7.1c deep-scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 5.e scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:12:52 compute-0 ceph-mon[74676]: from='client.14457 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:12:52 compute-0 ceph-mon[74676]: Deploying daemon node-exporter.compute-1 on compute-1
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 3.8 scrub starts
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 3.8 scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 7.12 scrub starts
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 7.12 scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 6.c scrub starts
Feb 02 11:12:52 compute-0 ceph-mon[74676]: 6.c scrub ok
Feb 02 11:12:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1373022539' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb 02 11:12:52 compute-0 systemd[1]: libpod-03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd.scope: Deactivated successfully.
Feb 02 11:12:52 compute-0 podman[92763]: 2026-02-02 11:12:52.654253706 +0000 UTC m=+1.215522333 container died 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e1f4de96b05bb48d62cdf99c55e6c6c482750a324712e02c89e6d6e32471deb-merged.mount: Deactivated successfully.
Feb 02 11:12:52 compute-0 sshd-session[91042]: Connection closed by 192.168.122.100 port 38492
Feb 02 11:12:52 compute-0 sshd-session[91028]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:12:52 compute-0 podman[92763]: 2026-02-02 11:12:52.68915941 +0000 UTC m=+1.250428047 container remove 03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd (image=quay.io/ceph/ceph:v19, name=upbeat_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:52 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Feb 02 11:12:52 compute-0 systemd[1]: session-34.scope: Consumed 4.045s CPU time.
Feb 02 11:12:52 compute-0 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Feb 02 11:12:52 compute-0 systemd-logind[793]: Removed session 34.
Feb 02 11:12:52 compute-0 systemd[1]: libpod-conmon-03f1965e41af44443c96d53e0762ef19f92eb01f0d218dc9698f75315888f5fd.scope: Deactivated successfully.
Feb 02 11:12:52 compute-0 sudo[92710]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setuser ceph since I am not root
Feb 02 11:12:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setgroup ceph since I am not root
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:12:52 compute-0 sudo[92858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igczfwbbtogltmlzbmlcsrgvnxngyilx ; /usr/bin/python3'
Feb 02 11:12:52 compute-0 sudo[92858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:12:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:52.843+0000 7f0507e95140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:52 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:12:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:52.927+0000 7f0507e95140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:52 compute-0 python3[92860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.011216191 +0000 UTC m=+0.041334765 container create ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:12:53 compute-0 systemd[1]: Started libpod-conmon-ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58.scope.
Feb 02 11:12:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d71036918f10fbe75844c8c3204fa77887fd4888c8355a9f634cc9a16d1e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d71036918f10fbe75844c8c3204fa77887fd4888c8355a9f634cc9a16d1e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d71036918f10fbe75844c8c3204fa77887fd4888c8355a9f634cc9a16d1e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.085458013 +0000 UTC m=+0.115576587 container init ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:52.99507257 +0000 UTC m=+0.025191164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.093175829 +0000 UTC m=+0.123294393 container start ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.097246672 +0000 UTC m=+0.127365266 container attach ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:12:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Feb 02 11:12:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579500551' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb 02 11:12:53 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 5.4 deep-scrub starts
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 5.4 deep-scrub ok
Feb 02 11:12:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1373022539' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 7.17 scrub starts
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 7.17 scrub ok
Feb 02 11:12:53 compute-0 ceph-mon[74676]: mgrmap e18: compute-0.dhyzzj(active, since 6s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 6.6 scrub starts
Feb 02 11:12:53 compute-0 ceph-mon[74676]: 6.6 scrub ok
Feb 02 11:12:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2579500551' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb 02 11:12:53 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 02 11:12:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579500551' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb 02 11:12:53 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.dhyzzj(active, since 7s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:53 compute-0 systemd[1]: libpod-ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58.scope: Deactivated successfully.
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.719522843 +0000 UTC m=+0.749641417 container died ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:12:53 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8d71036918f10fbe75844c8c3204fa77887fd4888c8355a9f634cc9a16d1e8-merged.mount: Deactivated successfully.
Feb 02 11:12:53 compute-0 podman[92861]: 2026-02-02 11:12:53.756491175 +0000 UTC m=+0.786609749 container remove ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58 (image=quay.io/ceph/ceph:v19, name=great_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:12:53 compute-0 systemd[1]: libpod-conmon-ce73cce324ec72c780b53a0112b9518905dec92888c67ea5cb916bfb550d4c58.scope: Deactivated successfully.
Feb 02 11:12:53 compute-0 sudo[92858]: pam_unix(sudo:session): session closed for user root
Feb 02 11:12:53 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:12:53 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:12:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:53.813+0000 7f0507e95140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:12:54 compute-0 python3[93001]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:54.494+0000 7f0507e95140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 02 11:12:54 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 3.1d deep-scrub starts
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 3.1d deep-scrub ok
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 7.15 scrub starts
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 7.15 scrub ok
Feb 02 11:12:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2579500551' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb 02 11:12:54 compute-0 ceph-mon[74676]: mgrmap e19: compute-0.dhyzzj(active, since 7s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 6.4 scrub starts
Feb 02 11:12:54 compute-0 ceph-mon[74676]: 6.4 scrub ok
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:54.673+0000 7f0507e95140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 python3[93072]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030774.2430878-37514-271396554085062/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:54.750+0000 7f0507e95140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:12:54 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:12:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:54.893+0000 7f0507e95140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:12:55 compute-0 sudo[93120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkhksmdqkqikncyqakvlrlkitzgbqjz ; /usr/bin/python3'
Feb 02 11:12:55 compute-0 sudo[93120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:12:55 compute-0 python3[93122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:12:55 compute-0 podman[93123]: 2026-02-02 11:12:55.182963344 +0000 UTC m=+0.039912755 container create b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:12:55 compute-0 systemd[1]: Started libpod-conmon-b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187.scope.
Feb 02 11:12:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14300cf258d7bb0c593d451582bc5ff76df09db8359bbc62b805d3eca72e8986/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14300cf258d7bb0c593d451582bc5ff76df09db8359bbc62b805d3eca72e8986/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14300cf258d7bb0c593d451582bc5ff76df09db8359bbc62b805d3eca72e8986/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:12:55 compute-0 podman[93123]: 2026-02-02 11:12:55.251040575 +0000 UTC m=+0.107989986 container init b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:12:55 compute-0 podman[93123]: 2026-02-02 11:12:55.255056297 +0000 UTC m=+0.112005708 container start b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:12:55 compute-0 podman[93123]: 2026-02-02 11:12:55.258556004 +0000 UTC m=+0.115505445 container attach b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:12:55 compute-0 podman[93123]: 2026-02-02 11:12:55.162488513 +0000 UTC m=+0.019437944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:12:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:12:55 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 7.0 scrub starts
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 7.0 scrub ok
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 5.1a scrub starts
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 5.1a scrub ok
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 6.0 scrub starts
Feb 02 11:12:55 compute-0 ceph-mon[74676]: 6.0 scrub ok
Feb 02 11:12:55 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:12:55 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:12:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:55.937+0000 7f0507e95140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.171+0000 7f0507e95140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.250+0000 7f0507e95140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.324+0000 7f0507e95140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.410+0000 7f0507e95140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.486+0000 7f0507e95140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 3.1b scrub starts
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 3.1b scrub ok
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 7.7 deep-scrub starts
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 7.7 deep-scrub ok
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 6.f deep-scrub starts
Feb 02 11:12:56 compute-0 ceph-mon[74676]: 6.f deep-scrub ok
Feb 02 11:12:56 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 02 11:12:56 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.839+0000 7f0507e95140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:12:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:56.949+0000 7f0507e95140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:12:56 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:12:57 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:12:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:57.407+0000 7f0507e95140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:12:57 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:12:57 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:12:57 compute-0 ceph-mon[74676]: 3.1a scrub starts
Feb 02 11:12:57 compute-0 ceph-mon[74676]: 3.1a scrub ok
Feb 02 11:12:57 compute-0 ceph-mon[74676]: 6.9 scrub starts
Feb 02 11:12:57 compute-0 ceph-mon[74676]: 6.9 scrub ok
Feb 02 11:12:57 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 02 11:12:57 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 02 11:12:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:57.993+0000 7f0507e95140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:12:57 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:12:57 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.069+0000 7f0507e95140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.154+0000 7f0507e95140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.313+0000 7f0507e95140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.389+0000 7f0507e95140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.557+0000 7f0507e95140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:12:58 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:12:58 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 7.1 scrub starts
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 7.1 scrub ok
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 3.9 scrub starts
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 3.9 scrub ok
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 6.b scrub starts
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 6.b scrub ok
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 7.d scrub starts
Feb 02 11:12:58 compute-0 ceph-mon[74676]: 7.d scrub ok
Feb 02 11:12:58 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:12:58 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv started
Feb 02 11:12:58 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 02 11:12:58 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 02 11:12:58 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.dhyzzj(active, since 12s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:58.802+0000 7f0507e95140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:12:58 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe restarted
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe started
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:59.101+0000 7f0507e95140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:59.183+0000 7f0507e95140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:12:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x559946bff860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  1: '-n'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  2: 'mgr.compute-0.dhyzzj'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  3: '-f'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  4: '--setuser'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  5: 'ceph'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  6: '--setgroup'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  7: 'ceph'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  8: '--default-log-to-file=false'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  9: '--default-log-to-journald=true'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr respawn  exe_path /proc/self/exe
Feb 02 11:12:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb 02 11:12:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.dhyzzj(active, starting, since 0.0295421s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setuser ceph since I am not root
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setgroup ceph since I am not root
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:59.378+0000 7f486ef53140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:12:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:12:59.458+0000 7f486ef53140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:12:59 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:12:59 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb 02 11:12:59 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 3.0 scrub starts
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 6.14 deep-scrub starts
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 3.0 scrub ok
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 6.14 deep-scrub ok
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 7.c scrub starts
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 7.c scrub ok
Feb 02 11:12:59 compute-0 ceph-mon[74676]: mgrmap e20: compute-0.dhyzzj(active, since 12s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:59 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe restarted
Feb 02 11:12:59 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe started
Feb 02 11:12:59 compute-0 ceph-mon[74676]: Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:12:59 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:12:59 compute-0 ceph-mon[74676]: osdmap e47: 3 total, 3 up, 3 in
Feb 02 11:12:59 compute-0 ceph-mon[74676]: mgrmap e21: compute-0.dhyzzj(active, starting, since 0.0295421s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 7.19 scrub starts
Feb 02 11:12:59 compute-0 ceph-mon[74676]: 7.19 scrub ok
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:13:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:00.276+0000 7f486ef53140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:13:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:00 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 02 11:13:00 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 6.16 scrub starts
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 6.16 scrub ok
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 5.d scrub starts
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 5.d scrub ok
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 7.1a scrub starts
Feb 02 11:13:00 compute-0 ceph-mon[74676]: 7.1a scrub ok
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:13:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:00.931+0000 7f486ef53140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:13:00 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:01.092+0000 7f486ef53140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:01.170+0000 7f486ef53140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:13:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:01.310+0000 7f486ef53140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:13:01 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 02 11:13:01 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 6.11 scrub starts
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 6.11 scrub ok
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 5.b scrub starts
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 5.b scrub ok
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 5.18 scrub starts
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 5.18 scrub ok
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 3.e scrub starts
Feb 02 11:13:01 compute-0 ceph-mon[74676]: 3.e scrub ok
Feb 02 11:13:01 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.361+0000 7f486ef53140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.586+0000 7f486ef53140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:13:02 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 02 11:13:02 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.668+0000 7f486ef53140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 6.10 scrub starts
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 6.10 scrub ok
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 3.1c scrub starts
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 3.1c scrub ok
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 5.12 scrub starts
Feb 02 11:13:02 compute-0 ceph-mon[74676]: 5.12 scrub ok
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.740+0000 7f486ef53140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.827+0000 7f486ef53140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:13:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:02.899+0000 7f486ef53140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:13:02 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:13:02 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Feb 02 11:13:02 compute-0 systemd[76004]: Activating special unit Exit the Session...
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped target Main User Target.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped target Basic System.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped target Paths.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped target Sockets.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped target Timers.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 02 11:13:02 compute-0 systemd[76004]: Closed D-Bus User Message Bus Socket.
Feb 02 11:13:02 compute-0 systemd[76004]: Stopped Create User's Volatile Files and Directories.
Feb 02 11:13:02 compute-0 systemd[76004]: Removed slice User Application Slice.
Feb 02 11:13:02 compute-0 systemd[76004]: Reached target Shutdown.
Feb 02 11:13:02 compute-0 systemd[76004]: Finished Exit the Session.
Feb 02 11:13:02 compute-0 systemd[76004]: Reached target Exit the Session.
Feb 02 11:13:02 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Feb 02 11:13:02 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Feb 02 11:13:02 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 02 11:13:02 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 02 11:13:02 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 02 11:13:02 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 02 11:13:02 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Feb 02 11:13:02 compute-0 systemd[1]: user-42477.slice: Consumed 27.391s CPU time.
Feb 02 11:13:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:03.274+0000 7f486ef53140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:13:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:03.379+0000 7f486ef53140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:13:03 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 02 11:13:03 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 6.13 scrub starts
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 6.13 scrub ok
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 4.1b scrub starts
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 4.1b scrub ok
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 6.12 scrub starts
Feb 02 11:13:03 compute-0 ceph-mon[74676]: 6.12 scrub ok
Feb 02 11:13:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:03.849+0000 7f486ef53140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:13:03 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:13:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:04.466+0000 7f486ef53140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:13:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:04.549+0000 7f486ef53140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:13:04 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Feb 02 11:13:04 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Feb 02 11:13:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:04.637+0000 7f486ef53140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:13:04 compute-0 ceph-mon[74676]: 6.1d scrub starts
Feb 02 11:13:04 compute-0 ceph-mon[74676]: 6.1d scrub ok
Feb 02 11:13:04 compute-0 ceph-mon[74676]: 6.19 scrub starts
Feb 02 11:13:04 compute-0 ceph-mon[74676]: 6.19 scrub ok
Feb 02 11:13:04 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:13:04 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv started
Feb 02 11:13:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:04.800+0000 7f486ef53140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:13:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:04.892+0000 7f486ef53140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:13:04 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:13:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:05.067+0000 7f486ef53140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:13:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:05.303+0000 7f486ef53140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe restarted
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe started
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:05.603+0000 7f486ef53140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:13:05 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 02 11:13:05 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 02 11:13:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:13:05.676+0000 7f486ef53140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
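Every "Module X has missing NOTIFY_TYPES member" line above comes from the module loader: recent mgr releases expect each Python module to declare which cluster-map updates it wants in a NOTIFY_TYPES class attribute, and warn (then deliver everything) when the attribute is absent. A minimal sketch of a module that declares it, assuming the mgr_module interface shipped in the ceph container image:

    # example_module.py - hypothetical mgr module showing the attribute the
    # loader checks for above; the module body is an illustration, not ceph code.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Subscribe only to the map updates this module actually handles.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            # Invoked by the mgr whenever one of the declared maps changes.
            self.log.info("got %s notification", notify_type)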
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x55d048449860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr handle_mgr_map Activating!
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr handle_mgr_map I am now activating
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.dhyzzj(active, starting, since 0.0304147s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: balancer
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Starting
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Manager daemon compute-0.dhyzzj is now available
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:13:05
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
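The balancer lines show the module coming up in upmap mode with a 5% misplaced ceiling and immediately backing off because, this early in mgr activation, 100% of PG states are still unknown to it. Its state can be polled the same way the CLI does; a sketch over librados, where the conf path and client name are assumptions:

    # balancer_status.py - query the balancer module started above.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.admin') as cluster:
        ret, out, errs = cluster.mgr_command(
            json.dumps({"prefix": "balancer status", "format": "json"}), b'')
        print(json.loads(out))  # active flag, mode, queued plans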
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 6.17 scrub starts
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 5.19 scrub starts
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 6.17 scrub ok
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 5.19 scrub ok
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv started
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe restarted
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe started
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 5.1b scrub starts
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 5.1b scrub ok
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 4.1d scrub starts
Feb 02 11:13:05 compute-0 ceph-mon[74676]: 4.1d scrub ok
Feb 02 11:13:05 compute-0 ceph-mon[74676]: osdmap e48: 3 total, 3 up, 3 in
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mgrmap e22: compute-0.dhyzzj(active, starting, since 0.0304147s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mon[74676]: Manager daemon compute-0.dhyzzj is now available
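The burst of audit lines above records the newly active mgr pulling metadata for every mon, mgr, osd, and mds so its service map is complete. Each cmd=[...] payload is a plain mon command and can be replayed verbatim; a sketch for the first one, with the JSON copied from the audit entry and the conf path an assumption:

    # mon_metadata.py - replay one metadata query from the audit log above.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "mon metadata", "id": "compute-0"})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(out.decode())  # JSON blob: addrs, hostname, ceph_version, ...
    finally:
        cluster.shutdown()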
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: cephadm
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: crash
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: dashboard
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [dashboard INFO access_control] Loading user roles DB version=2
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [dashboard INFO sso] Loading SSO DB version=1
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: devicehealth
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Starting
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: iostat
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: nfs
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: orchestrator
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: pg_autoscaler
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: progress
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [progress INFO root] Loading...
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f47f0a6b190>, <progress.module.GhostEvent object at 0x7f47f0a6b1c0>, <progress.module.GhostEvent object at 0x7f47f0a6b160>, <progress.module.GhostEvent object at 0x7f47f0a6b1f0>, <progress.module.GhostEvent object at 0x7f47f0a6b220>, <progress.module.GhostEvent object at 0x7f47f0a6b250>, <progress.module.GhostEvent object at 0x7f47f0a6b280>, <progress.module.GhostEvent object at 0x7f47f0a6b2b0>, <progress.module.GhostEvent object at 0x7f47f0a6b2e0>, <progress.module.GhostEvent object at 0x7f47f0a6b310>, <progress.module.GhostEvent object at 0x7f47f0a6b340>, <progress.module.GhostEvent object at 0x7f47f0a6b370>] historic events
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] recovery thread starting
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] starting setup
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: rbd_support
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: restful
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: status
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [restful INFO root] server_addr: :: server_port: 8003
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [restful WARNING root] server not running: no certificate configured
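The restful module binds its address (port 8003) but stays down because no TLS certificate is stored for it; unlike the dashboard it will not fall back to plain HTTP. Per the module's documentation a self-signed certificate can be generated in place; a sketch issuing that command over librados, with the command name as documented and the conf path an assumption:

    # restful_cert.py - generate the certificate the restful module wants.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ret, out, errs = cluster.mgr_command(
            json.dumps({"prefix": "restful create-self-signed-cert"}), b'')
        print(ret, errs)  # the module restarts its server once a cert is stored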
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: telemetry
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] PerfHandler: starting
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: images, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TaskHandler: starting
Feb 02 11:13:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"} v 0)
Feb 02 11:13:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: volumes
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 11:13:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] setup complete
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
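The controller list above is the dashboard wiring its REST routes under /api and /ui-api; per the earlier "server: ssl=no host=192.168.122.100 port=8443" line it is serving plain HTTP here. Once a dashboard user exists, any of these endpoints can be driven with a bearer token obtained from /api/auth; a sketch with the requests library, where the credentials are placeholders:

    # dashboard_health.py - token login plus one GET against the routes above.
    import requests

    BASE = "http://192.168.122.100:8443"  # ssl=no, host/port from the log
    HDRS = {"Accept": "application/vnd.ceph.api.v1.0+json"}  # versioned API

    token = requests.post(f"{BASE}/api/auth",
                          json={"username": "admin", "password": "secret"},
                          headers=HDRS).json()["token"]
    r = requests.get(f"{BASE}/api/health/minimal",
                     headers={**HDRS, "Authorization": f"Bearer {token}"})
    print(r.json())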
Feb 02 11:13:06 compute-0 sshd-session[93311]: Accepted publickey for ceph-admin from 192.168.122.100 port 57808 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:13:06 compute-0 systemd-logind[793]: New session 35 of user ceph-admin.
Feb 02 11:13:06 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 02 11:13:06 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 02 11:13:06 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 02 11:13:06 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 02 11:13:06 compute-0 systemd[93326]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.module] Engine started.
Feb 02 11:13:06 compute-0 systemd[93326]: Queued start job for default target Main User Target.
Feb 02 11:13:06 compute-0 systemd[93326]: Created slice User Application Slice.
Feb 02 11:13:06 compute-0 systemd[93326]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 11:13:06 compute-0 systemd[93326]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 11:13:06 compute-0 systemd[93326]: Reached target Paths.
Feb 02 11:13:06 compute-0 systemd[93326]: Reached target Timers.
Feb 02 11:13:06 compute-0 systemd[93326]: Starting D-Bus User Message Bus Socket...
Feb 02 11:13:06 compute-0 systemd[93326]: Starting Create User's Volatile Files and Directories...
Feb 02 11:13:06 compute-0 systemd[93326]: Finished Create User's Volatile Files and Directories.
Feb 02 11:13:06 compute-0 systemd[93326]: Listening on D-Bus User Message Bus Socket.
Feb 02 11:13:06 compute-0 systemd[93326]: Reached target Sockets.
Feb 02 11:13:06 compute-0 systemd[93326]: Reached target Basic System.
Feb 02 11:13:06 compute-0 systemd[93326]: Reached target Main User Target.
Feb 02 11:13:06 compute-0 systemd[93326]: Startup finished in 105ms.
Feb 02 11:13:06 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 02 11:13:06 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Feb 02 11:13:06 compute-0 sshd-session[93311]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:13:06 compute-0 sudo[93343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:06 compute-0 sudo[93343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:06 compute-0 sudo[93343]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:06 compute-0 sudo[93368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:13:06 compute-0 sudo[93368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:06 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 02 11:13:06 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14484 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.dhyzzj(active, since 1.05398s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 02 11:13:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0[74672]: 2026-02-02T11:13:06.742+0000 7f1b8fccb640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e2 new map
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-02-02T11:13:06.743064+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:06.742944+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:0
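The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX checks raised here are the expected transient of "fs new": the filesystem now exists (fsmap cephfs:0) but no MDS daemon has been deployed yet, so both clear once the mds.cephfs service saved just below is placed. The raised checks can be watched programmatically; a sketch, with the conf path an assumption:

    # health_checks.py - list the health checks raised during fs creation.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "health", "format": "json"}), b'')
        print(list(json.loads(out)["checks"]))  # e.g. ['MDS_ALL_DOWN', ...]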
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: 3.1f scrub starts
Feb 02 11:13:06 compute-0 ceph-mon[74676]: 3.1f scrub ok
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: 4.1a scrub starts
Feb 02 11:13:06 compute-0 ceph-mon[74676]: 4.1a scrub ok
Feb 02 11:13:06 compute-0 ceph-mon[74676]: mgrmap e23: compute-0.dhyzzj(active, since 1.05398s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Feb 02 11:13:06 compute-0 ceph-mon[74676]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 02 11:13:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 02 11:13:06 compute-0 ceph-mon[74676]: osdmap e49: 3 total, 3 up, 3 in
Feb 02 11:13:06 compute-0 ceph-mon[74676]: fsmap cephfs:0
Feb 02 11:13:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:06 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
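Taken together, the audit trail shows what a single "fs volume create" fans out into: the volumes module creates the cephfs.cephfs.meta and cephfs.cephfs.data pools, runs "fs new", then hands cephadm an mds service spec. The top-level command itself is an mgr command and can be issued directly; a sketch with the payload copied from the dispatch line above, the conf path being an assumption:

    # fs_volume_create.py - the same mgr command the audit log records above.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({"prefix": "fs volume create",
                          "name": "cephfs",
                          "placement": "compute-0 compute-1 compute-2"})
        ret, out, errs = cluster.mgr_command(cmd, b'')
        print(ret, errs)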
Feb 02 11:13:06 compute-0 systemd[1]: libpod-b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187.scope: Deactivated successfully.
Feb 02 11:13:06 compute-0 conmon[93139]: conmon b4b2180d836c951b4e59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187.scope/container/memory.events
Feb 02 11:13:06 compute-0 podman[93123]: 2026-02-02 11:13:06.813452318 +0000 UTC m=+11.670401729 container died b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-14300cf258d7bb0c593d451582bc5ff76df09db8359bbc62b805d3eca72e8986-merged.mount: Deactivated successfully.
Feb 02 11:13:06 compute-0 podman[93123]: 2026-02-02 11:13:06.85006586 +0000 UTC m=+11.707015271 container remove b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187 (image=quay.io/ceph/ceph:v19, name=condescending_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:06 compute-0 systemd[1]: libpod-conmon-b4b2180d836c951b4e59d47c0f4896343f141829e518a25b2d797e5768476187.scope: Deactivated successfully.
Feb 02 11:13:06 compute-0 sudo[93120]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:06 compute-0 podman[93468]: 2026-02-02 11:13:06.985923852 +0000 UTC m=+0.065641233 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:06 compute-0 sudo[93509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidohlyvlhtabftochsnwoxvzcommece ; /usr/bin/python3'
Feb 02 11:13:06 compute-0 sudo[93509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:07 compute-0 podman[93468]: 2026-02-02 11:13:07.112182047 +0000 UTC m=+0.191899428 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:13:07] ENGINE Bus STARTING
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:13:07] ENGINE Bus STARTING
Feb 02 11:13:07 compute-0 python3[93513]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
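The ansible task runs "ceph orch apply --in-file" against a spec mounted at /home/ceph_spec.yaml; given the earlier "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" line, the file plausibly carries a standard mds service spec. A sketch that builds such a spec, where the field values mirror the log and PyYAML is an assumption:

    # mds_spec.py - the shape of spec that `ceph orch apply --in-file` consumes.
    import yaml

    spec = {
        "service_type": "mds",
        "service_id": "cephfs",
        "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]},
    }
    print(yaml.safe_dump(spec, sort_keys=False))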
Feb 02 11:13:07 compute-0 podman[93527]: 2026-02-02 11:13:07.185859084 +0000 UTC m=+0.040292646 container create b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:13:07 compute-0 systemd[1]: Started libpod-conmon-b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555.scope.
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:13:07] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:13:07] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:13:07] ENGINE Client ('192.168.122.100', 44636) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:13:07] ENGINE Client ('192.168.122.100', 44636) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:13:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928836a93c97cb6654e925d0df1a0e7f6ae63df2342dda80b464d33daf7ae5c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928836a93c97cb6654e925d0df1a0e7f6ae63df2342dda80b464d33daf7ae5c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928836a93c97cb6654e925d0df1a0e7f6ae63df2342dda80b464d33daf7ae5c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:07 compute-0 podman[93527]: 2026-02-02 11:13:07.170405862 +0000 UTC m=+0.024839454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:07 compute-0 podman[93527]: 2026-02-02 11:13:07.267587705 +0000 UTC m=+0.122021297 container init b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:13:07 compute-0 podman[93527]: 2026-02-02 11:13:07.275177087 +0000 UTC m=+0.129610639 container start b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:13:07 compute-0 podman[93527]: 2026-02-02 11:13:07.278288254 +0000 UTC m=+0.132721846 container attach b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:13:07] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:13:07] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:13:07] ENGINE Bus STARTED
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:13:07] ENGINE Bus STARTED
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 podman[93688]: 2026-02-02 11:13:07.572795875 +0000 UTC m=+0.057265630 container exec c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:13:07 compute-0 podman[93688]: 2026-02-02 11:13:07.583179485 +0000 UTC m=+0.067649220 container exec_died c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ecstatic_wright[93573]: Scheduled mds.cephfs update...
Feb 02 11:13:07 compute-0 sudo[93368]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:07 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Feb 02 11:13:07 compute-0 systemd[1]: libpod-b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555.scope: Deactivated successfully.
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Feb 02 11:13:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 podman[93728]: 2026-02-02 11:13:07.714360867 +0000 UTC m=+0.024496175 container died b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-928836a93c97cb6654e925d0df1a0e7f6ae63df2342dda80b464d33daf7ae5c9-merged.mount: Deactivated successfully.
Feb 02 11:13:07 compute-0 sudo[93738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:07 compute-0 podman[93728]: 2026-02-02 11:13:07.755225668 +0000 UTC m=+0.065360956 container remove b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555 (image=quay.io/ceph/ceph:v19, name=ecstatic_wright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:07 compute-0 sudo[93738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:07 compute-0 systemd[1]: libpod-conmon-b5ef59e28a8a0f65c4e0e8e03f3e6163e83b16882ad0bf6bf1e2ccac1d147555.scope: Deactivated successfully.
Feb 02 11:13:07 compute-0 sudo[93738]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:07 compute-0 sudo[93509]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 7.1b scrub starts
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 7.1b scrub ok
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 6.1b scrub starts
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 6.1b scrub ok
Feb 02 11:13:07 compute-0 ceph-mon[74676]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: [02/Feb/2026:11:13:07] ENGINE Bus STARTING
Feb 02 11:13:07 compute-0 ceph-mon[74676]: [02/Feb/2026:11:13:07] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:13:07 compute-0 ceph-mon[74676]: [02/Feb/2026:11:13:07] ENGINE Client ('192.168.122.100', 44636) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 5.1c deep-scrub starts
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 ceph-mon[74676]: 5.1c deep-scrub ok
Feb 02 11:13:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:07 compute-0 sudo[93767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:13:07 compute-0 sudo[93767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:07 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:13:07 compute-0 sudo[93826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eueogeiybjjtykypgafznppvrperwskl ; /usr/bin/python3'
Feb 02 11:13:07 compute-0 sudo[93826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:08 compute-0 python3[93828]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:08 compute-0 podman[93840]: 2026-02-02 11:13:08.067203176 +0000 UTC m=+0.035285696 container create 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:08 compute-0 systemd[1]: Started libpod-conmon-775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45.scope.
Feb 02 11:13:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e971593a637824079f1bc4d6a4d7524d88baf024ec1e77d7bf930d083ddd1fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e971593a637824079f1bc4d6a4d7524d88baf024ec1e77d7bf930d083ddd1fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e971593a637824079f1bc4d6a4d7524d88baf024ec1e77d7bf930d083ddd1fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:08 compute-0 podman[93840]: 2026-02-02 11:13:08.12931351 +0000 UTC m=+0.097396050 container init 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:13:08 compute-0 podman[93840]: 2026-02-02 11:13:08.133654371 +0000 UTC m=+0.101736891 container start 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Feb 02 11:13:08 compute-0 podman[93840]: 2026-02-02 11:13:08.137250602 +0000 UTC m=+0.105333122 container attach 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:13:08 compute-0 podman[93840]: 2026-02-02 11:13:08.051977981 +0000 UTC m=+0.020060521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:08 compute-0 sudo[93767]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 sudo[93897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:08 compute-0 sudo[93897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 sudo[93897]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 sudo[93922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Feb 02 11:13:08 compute-0 sudo[93922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Feb 02 11:13:08 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 02 11:13:08 compute-0 sudo[93922]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.dhyzzj(active, since 3s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:13:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:13:08 compute-0 ceph-mon[74676]: [02/Feb/2026:11:13:07] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:13:08 compute-0 ceph-mon[74676]: [02/Feb/2026:11:13:07] ENGINE Bus STARTED
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:08 compute-0 ceph-mon[74676]: 3.1e scrub starts
Feb 02 11:13:08 compute-0 ceph-mon[74676]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:08 compute-0 ceph-mon[74676]: 3.1e scrub ok
Feb 02 11:13:08 compute-0 ceph-mon[74676]: 4.19 scrub starts
Feb 02 11:13:08 compute-0 ceph-mon[74676]: 4.19 scrub ok
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: mgrmap e24: compute-0.dhyzzj(active, since 3s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:13:08 compute-0 sudo[93968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:13:08 compute-0 sudo[93968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 sudo[93968]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 sudo[93993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:13:08 compute-0 sudo[93993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 sudo[93993]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 sudo[94018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:13:08 compute-0 sudo[94018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 sudo[94018]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:08 compute-0 sudo[94043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:08 compute-0 sudo[94043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:08 compute-0 sudo[94043]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94068]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 sudo[94116]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94141]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 11:13:09 compute-0 sudo[94166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94166]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 sudo[94191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:13:09 compute-0 sudo[94191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94191]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:13:09 compute-0 sudo[94216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94216]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94241]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb 02 11:13:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Feb 02 11:13:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb 02 11:13:09 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb 02 11:13:09 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [1] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:13:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Feb 02 11:13:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Feb 02 11:13:09 compute-0 sudo[94266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:09 compute-0 sudo[94266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94266]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 sudo[94291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94291]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94339]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:13:09 compute-0 sudo[94364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94364]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:09 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 02 11:13:09 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 02 11:13:09 compute-0 sudo[94389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 sudo[94389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94389]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 sudo[94414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:13:09 compute-0 sudo[94414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94414]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mon[74676]: from='client.14526 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 5.f scrub starts
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 5.f scrub ok
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 5.1d scrub starts
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 5.1d scrub ok
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 2.15 scrub starts
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: 2.15 scrub ok
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:13:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Feb 02 11:13:09 compute-0 ceph-mon[74676]: osdmap e50: 3 total, 3 up, 3 in
Feb 02 11:13:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Feb 02 11:13:09 compute-0 sudo[94439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:13:09 compute-0 sudo[94439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94439]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:13:09 compute-0 sudo[94464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94464]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 sudo[94489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:09 compute-0 sudo[94489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94489]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:09 compute-0 sudo[94514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:13:09 compute-0 sudo[94514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:09 compute-0 sudo[94514]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94562]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94587]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 sudo[94612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94612]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 sudo[94637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:13:10 compute-0 sudo[94637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94637]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 sudo[94662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:13:10 compute-0 sudo[94662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94662]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:10 compute-0 sudo[94712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94712]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 sudo[94737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94737]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 sudo[94785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94785]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb 02 11:13:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 51 pg[12.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [1] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:13:10 compute-0 sudo[94810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:13:10 compute-0 sudo[94810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94810]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 sudo[94845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 sudo[94845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:10 compute-0 sudo[94845]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:10 compute-0 systemd[1]: libpod-775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45.scope: Deactivated successfully.
Feb 02 11:13:10 compute-0 podman[93840]: 2026-02-02 11:13:10.593711763 +0000 UTC m=+2.561794283 container died 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e971593a637824079f1bc4d6a4d7524d88baf024ec1e77d7bf930d083ddd1fa-merged.mount: Deactivated successfully.
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 podman[93840]: 2026-02-02 11:13:10.645270803 +0000 UTC m=+2.613353333 container remove 775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45 (image=quay.io/ceph/ceph:v19, name=crazy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:13:10 compute-0 systemd[1]: libpod-conmon-775a0e98c32ffafc5716c7b3825304d21a6ed6cb3dd689c78d207af34f9b9c45.scope: Deactivated successfully.
Feb 02 11:13:10 compute-0 sudo[93826]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:10 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:13:10 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 02 11:13:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev b5439483-ffaa-4858-a921-1c18edfa3a68 (Updating node-exporter deployment (+1 -> 3))
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Feb 02 11:13:10 compute-0 ceph-mgr[74969]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 3.2 scrub starts
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 3.2 scrub ok
Feb 02 11:13:10 compute-0 ceph-mon[74676]: pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 4.e scrub starts
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 4.e scrub ok
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 6.1c scrub starts
Feb 02 11:13:10 compute-0 ceph-mon[74676]: 6.1c scrub ok
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Feb 02 11:13:10 compute-0 ceph-mon[74676]: osdmap e51: 3 total, 3 up, 3 in
Feb 02 11:13:10 compute-0 ceph-mon[74676]: mgrmap e25: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb 02 11:13:11 compute-0 sudo[94956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attspspliujchicbjmdqjqafmqfefcxb ; /usr/bin/python3'
Feb 02 11:13:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb 02 11:13:11 compute-0 sudo[94956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:11 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb 02 11:13:11 compute-0 python3[94958]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 11:13:11 compute-0 sudo[94956]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v10: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:11 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 02 11:13:11 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 02 11:13:11 compute-0 sudo[95029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwssehyffcidgirhfcdsxykiszmlqiir ; /usr/bin/python3'
Feb 02 11:13:11 compute-0 sudo[95029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:11 compute-0 ceph-mon[74676]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:11 compute-0 ceph-mon[74676]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 3.4 scrub starts
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 3.4 scrub ok
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 5.1 scrub starts
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 7.1d deep-scrub starts
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 5.1 scrub ok
Feb 02 11:13:11 compute-0 ceph-mon[74676]: 7.1d deep-scrub ok
Feb 02 11:13:11 compute-0 ceph-mon[74676]: Deploying daemon node-exporter.compute-2 on compute-2
Feb 02 11:13:11 compute-0 ceph-mon[74676]: osdmap e52: 3 total, 3 up, 3 in
Feb 02 11:13:11 compute-0 python3[95031]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770030791.3990653-37563-44590275376028/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=007d0511578188361071d471ee7ce6e57b71b01c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:13:11 compute-0 sudo[95029]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:12 compute-0 sudo[95079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lacqrwwwpsrgzpmuiirhlhvchbxetnrw ; /usr/bin/python3'
Feb 02 11:13:12 compute-0 sudo[95079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:12 compute-0 python3[95081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.42312641 +0000 UTC m=+0.037853867 container create a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:13:12 compute-0 systemd[1]: Started libpod-conmon-a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085.scope.
Feb 02 11:13:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af91d247ebc19c11394dc0517caff9d59d06c17448bb2b7edf7e5877a2c5b4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af91d247ebc19c11394dc0517caff9d59d06c17448bb2b7edf7e5877a2c5b4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.490874161 +0000 UTC m=+0.105601638 container init a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.4961906 +0000 UTC m=+0.110918057 container start a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.499326577 +0000 UTC m=+0.114054154 container attach a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.40378882 +0000 UTC m=+0.018516307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.dhyzzj(active, since 6s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:12 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 02 11:13:12 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 5.5 scrub starts
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 5.5 scrub ok
Feb 02 11:13:12 compute-0 ceph-mon[74676]: pgmap v10: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 6.1 scrub starts
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 6.1 scrub ok
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 4.18 scrub starts
Feb 02 11:13:12 compute-0 ceph-mon[74676]: 4.18 scrub ok
Feb 02 11:13:12 compute-0 ceph-mon[74676]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mgrmap e26: compute-0.dhyzzj(active, since 6s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1133502423' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1133502423' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 02 11:13:12 compute-0 systemd[1]: libpod-a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085.scope: Deactivated successfully.
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.945367278 +0000 UTC m=+0.560094735 container died a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af91d247ebc19c11394dc0517caff9d59d06c17448bb2b7edf7e5877a2c5b4a-merged.mount: Deactivated successfully.
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:12 compute-0 podman[95082]: 2026-02-02 11:13:12.977787043 +0000 UTC m=+0.592514500 container remove a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085 (image=quay.io/ceph/ceph:v19, name=inspiring_brattain, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb 02 11:13:12 compute-0 systemd[1]: libpod-conmon-a328b52999693df1c84fcba3117ee28ea88f01f9d2a5b74f75d361ab58902085.scope: Deactivated successfully.
Feb 02 11:13:12 compute-0 sudo[95079]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:12 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev b5439483-ffaa-4858-a921-1c18edfa3a68 (Updating node-exporter deployment (+1 -> 3))
Feb 02 11:13:12 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event b5439483-ffaa-4858-a921-1c18edfa3a68 (Updating node-exporter deployment (+1 -> 3)) in 2 seconds
Feb 02 11:13:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb 02 11:13:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:13:13 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:13:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:13:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:13:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:13 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:13 compute-0 sudo[95133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:13 compute-0 sudo[95133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:13 compute-0 sudo[95133]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:13 compute-0 sudo[95158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:13:13 compute-0 sudo[95158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.443851754 +0000 UTC m=+0.037657272 container create 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:13 compute-0 systemd[1]: Started libpod-conmon-4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c.scope.
Feb 02 11:13:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.493055707 +0000 UTC m=+0.086861245 container init 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.496677059 +0000 UTC m=+0.090482577 container start 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.499336523 +0000 UTC m=+0.093142041 container attach 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:13:13 compute-0 crazy_euler[95239]: 167 167
Feb 02 11:13:13 compute-0 systemd[1]: libpod-4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c.scope: Deactivated successfully.
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.500860015 +0000 UTC m=+0.094665603 container died 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e973e2e883ee9aafbb9e81ef77739c181fcc55526f60769645801ec1d7d4ed91-merged.mount: Deactivated successfully.
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.429291267 +0000 UTC m=+0.023096805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:13 compute-0 podman[95222]: 2026-02-02 11:13:13.538384573 +0000 UTC m=+0.132190111 container remove 4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:13:13 compute-0 systemd[1]: libpod-conmon-4a77a31170b7afd2e753a77b63f4df3dff9136b768d209919ad78e9dcdeab02c.scope: Deactivated successfully.
Feb 02 11:13:13 compute-0 sudo[95278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnppzwqoukooagdzrtgsaxesieivsfxl ; /usr/bin/python3'
Feb 02 11:13:13 compute-0 sudo[95278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:13 compute-0 podman[95288]: 2026-02-02 11:13:13.648517987 +0000 UTC m=+0.040523692 container create 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:13 compute-0 python3[95282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:13 compute-0 systemd[1]: Started libpod-conmon-3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c.scope.
Feb 02 11:13:13 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Feb 02 11:13:13 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Feb 02 11:13:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Feb 02 11:13:13 compute-0 podman[95305]: 2026-02-02 11:13:13.722660087 +0000 UTC m=+0.040465521 container create ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:13:13 compute-0 podman[95288]: 2026-02-02 11:13:13.628190569 +0000 UTC m=+0.020196264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:13 compute-0 podman[95288]: 2026-02-02 11:13:13.738941691 +0000 UTC m=+0.130947386 container init 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:13:13 compute-0 podman[95288]: 2026-02-02 11:13:13.744199148 +0000 UTC m=+0.136204833 container start 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:13:13 compute-0 podman[95288]: 2026-02-02 11:13:13.747563292 +0000 UTC m=+0.139569007 container attach 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:13 compute-0 systemd[1]: Started libpod-conmon-ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa.scope.
Feb 02 11:13:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ff0897711b6fb21c412e22636d5f4c33190cb72bd99ae56d43eae7b661b877/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ff0897711b6fb21c412e22636d5f4c33190cb72bd99ae56d43eae7b661b877/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:13 compute-0 podman[95305]: 2026-02-02 11:13:13.798089093 +0000 UTC m=+0.115894527 container init ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:13:13 compute-0 podman[95305]: 2026-02-02 11:13:13.801882188 +0000 UTC m=+0.119687622 container start ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:13:13 compute-0 podman[95305]: 2026-02-02 11:13:13.704191961 +0000 UTC m=+0.021997425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:13 compute-0 podman[95305]: 2026-02-02 11:13:13.805352995 +0000 UTC m=+0.123158429 container attach ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 3.3 scrub starts
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 3.3 scrub ok
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 3.1 scrub starts
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 3.1 scrub ok
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 7.5 scrub starts
Feb 02 11:13:13 compute-0 ceph-mon[74676]: 7.5 scrub ok
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1133502423' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1133502423' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:13:13 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:13 compute-0 trusting_robinson[95312]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:13:13 compute-0 trusting_robinson[95312]: --> All data devices are unavailable
Feb 02 11:13:14 compute-0 systemd[1]: libpod-3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 podman[95360]: 2026-02-02 11:13:14.066021002 +0000 UTC m=+0.024263929 container died 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e7e2d28e9230d0de9242253c95a8d3d00aa67114feeaa6da18b898afe6d014-merged.mount: Deactivated successfully.
Feb 02 11:13:14 compute-0 podman[95360]: 2026-02-02 11:13:14.100169415 +0000 UTC m=+0.058412322 container remove 3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_robinson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:14 compute-0 systemd[1]: libpod-conmon-3d0ca64c3b97364289125a8889bf12ec93e733660f1d16725b396ccbc7f3b80c.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 sudo[95158]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:14 compute-0 sudo[95375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:14 compute-0 sudo[95375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:14 compute-0 sudo[95375]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 11:13:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994864780' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:13:14 compute-0 ecstatic_keldysh[95327]: 
Feb 02 11:13:14 compute-0 ecstatic_keldysh[95327]: {"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":76,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1770030751,"num_in_osds":3,"osd_in_since":1770030722,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":107511808,"bytes_avail":64304414720,"bytes_total":64411926528,"read_bytes_sec":30027,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2026-02-02T11:13:06:743064+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-02-02T11:12:48.536497+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.dhyzzj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.iybsjv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.zebspe":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":4,"start_stamp":"2026-02-02T11:12:47.531533+0000","gid":14382,"addr":"192.168.122.100:0/4235536652","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.jqfvjy","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 
2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864296","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}},"24151":{"start_epoch":5,"start_stamp":"2026-02-02T11:12:47.545389+0000","gid":24151,"addr":"192.168.122.102:0/4081522708","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.xfsamf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}},"24170":{"start_epoch":4,"start_stamp":"2026-02-02T11:12:47.541797+0000","gid":24170,"addr":"192.168.122.101:0/3032189850","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.jqjceq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"fa3f36c7-7d82-4bf7-b01b-ec4efeb6353c":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb 02 11:13:14 compute-0 sudo[95400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:13:14 compute-0 sudo[95400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:14 compute-0 systemd[1]: libpod-ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 podman[95305]: 2026-02-02 11:13:14.241413678 +0000 UTC m=+0.559219112 container died ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Feb 02 11:13:14 compute-0 podman[95305]: 2026-02-02 11:13:14.277527276 +0000 UTC m=+0.595332710 container remove ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa (image=quay.io/ceph/ceph:v19, name=ecstatic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:14 compute-0 systemd[1]: libpod-conmon-ba7b5c6d1afbd2325baa29c97b2605dc84b2b39e7b9e1a9b5aff2b3e4ea540fa.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 sudo[95278]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:14 compute-0 sudo[95462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crcbqhripvaqoxwuglceahrlmfgpyqyw ; /usr/bin/python3'
Feb 02 11:13:14 compute-0 sudo[95462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ff0897711b6fb21c412e22636d5f4c33190cb72bd99ae56d43eae7b661b877-merged.mount: Deactivated successfully.
Feb 02 11:13:14 compute-0 python3[95471]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.556518094 +0000 UTC m=+0.037873218 container create 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:13:14 compute-0 systemd[1]: Started libpod-conmon-5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391.scope.
Feb 02 11:13:14 compute-0 podman[95519]: 2026-02-02 11:13:14.596244483 +0000 UTC m=+0.040005068 container create 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:13:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:14 compute-0 systemd[1]: Started libpod-conmon-158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813.scope.
Feb 02 11:13:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.629523422 +0000 UTC m=+0.110878566 container init 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632f19d327916cb0b6724bfa8c39144fe325c17e6a81e2284b81996b1f746e4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632f19d327916cb0b6724bfa8c39144fe325c17e6a81e2284b81996b1f746e4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.538594934 +0000 UTC m=+0.019950068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.641098675 +0000 UTC m=+0.122453799 container start 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:13:14 compute-0 podman[95519]: 2026-02-02 11:13:14.641619019 +0000 UTC m=+0.085379624 container init 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.644472949 +0000 UTC m=+0.125828073 container attach 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:14 compute-0 podman[95519]: 2026-02-02 11:13:14.64594123 +0000 UTC m=+0.089701815 container start 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Feb 02 11:13:14 compute-0 trusting_sanderson[95534]: 167 167
Feb 02 11:13:14 compute-0 systemd[1]: libpod-5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 podman[95519]: 2026-02-02 11:13:14.648995805 +0000 UTC m=+0.092756420 container attach 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.650628101 +0000 UTC m=+0.131983225 container died 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-caa043f9933c1429c7072ad256df7bfe72f40a20b34f91fb35c50e058f2c3c91-merged.mount: Deactivated successfully.
Feb 02 11:13:14 compute-0 podman[95519]: 2026-02-02 11:13:14.582379996 +0000 UTC m=+0.026140611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:14 compute-0 podman[95505]: 2026-02-02 11:13:14.680153675 +0000 UTC m=+0.161508799 container remove 5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:13:14 compute-0 systemd[1]: libpod-conmon-5835716d9d7803f84dbd8bb307475b6fe81e56d4d0875f29eafefe93e74a5391.scope: Deactivated successfully.
Feb 02 11:13:14 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 02 11:13:14 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 02 11:13:14 compute-0 podman[95585]: 2026-02-02 11:13:14.790584478 +0000 UTC m=+0.032370865 container create 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:13:14 compute-0 systemd[1]: Started libpod-conmon-21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc.scope.
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 4.5 deep-scrub starts
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 4.5 deep-scrub ok
Feb 02 11:13:14 compute-0 ceph-mon[74676]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 5.3 scrub starts
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 6.1e scrub starts
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 5.3 scrub ok
Feb 02 11:13:14 compute-0 ceph-mon[74676]: 6.1e scrub ok
Feb 02 11:13:14 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1994864780' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:13:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f7c1eda447268b7ab455d6cc32cf6a2645e267cc6139af36fb56572acc4e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f7c1eda447268b7ab455d6cc32cf6a2645e267cc6139af36fb56572acc4e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f7c1eda447268b7ab455d6cc32cf6a2645e267cc6139af36fb56572acc4e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f7c1eda447268b7ab455d6cc32cf6a2645e267cc6139af36fb56572acc4e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:14 compute-0 podman[95585]: 2026-02-02 11:13:14.776461104 +0000 UTC m=+0.018247511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:14 compute-0 podman[95585]: 2026-02-02 11:13:14.874273544 +0000 UTC m=+0.116059961 container init 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:13:14 compute-0 podman[95585]: 2026-02-02 11:13:14.881867986 +0000 UTC m=+0.123654363 container start 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:13:14 compute-0 podman[95585]: 2026-02-02 11:13:14.884528151 +0000 UTC m=+0.126314538 container attach 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:13:15 compute-0 focused_mcclintock[95540]: 
Feb 02 11:13:15 compute-0 focused_mcclintock[95540]: {"epoch":3,"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","modified":"2026-02-02T11:11:52.805819Z","created":"2026-02-02T11:09:56.920509Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Feb 02 11:13:15 compute-0 focused_mcclintock[95540]: dumped monmap epoch 3
Feb 02 11:13:15 compute-0 systemd[1]: libpod-158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 podman[95519]: 2026-02-02 11:13:15.088979388 +0000 UTC m=+0.532739983 container died 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:13:15 compute-0 podman[95519]: 2026-02-02 11:13:15.12666399 +0000 UTC m=+0.570424575 container remove 158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813 (image=quay.io/ceph/ceph:v19, name=focused_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:13:15 compute-0 systemd[1]: libpod-conmon-158fcdece19c9645f529298c4bc15b1241052ecc467648c029e94434a355b813.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 sudo[95462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:15 compute-0 inspiring_villani[95601]: {
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:     "1": [
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:         {
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "devices": [
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "/dev/loop3"
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             ],
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "lv_name": "ceph_lv0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "lv_size": "21470642176",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "name": "ceph_lv0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "tags": {
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.cluster_name": "ceph",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.crush_device_class": "",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.encrypted": "0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.osd_id": "1",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.type": "block",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.vdo": "0",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:                 "ceph.with_tpm": "0"
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             },
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "type": "block",
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:             "vg_name": "ceph_vg0"
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:         }
Feb 02 11:13:15 compute-0 inspiring_villani[95601]:     ]
Feb 02 11:13:15 compute-0 inspiring_villani[95601]: }
Feb 02 11:13:15 compute-0 systemd[1]: libpod-21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 podman[95585]: 2026-02-02 11:13:15.183011143 +0000 UTC m=+0.424797530 container died 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:13:15 compute-0 podman[95585]: 2026-02-02 11:13:15.213632258 +0000 UTC m=+0.455418645 container remove 21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:15 compute-0 systemd[1]: libpod-conmon-21fae627e50052f4a0f66217f604b9dfd3ddbd63c4f1611ff036fa18e74a74bc.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 sudo[95400]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:15 compute-0 sudo[95636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:15 compute-0 sudo[95636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:15 compute-0 sudo[95636]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:15 compute-0 sudo[95661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:13:15 compute-0 sudo[95661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1f7c1eda447268b7ab455d6cc32cf6a2645e267cc6139af36fb56572acc4e32-merged.mount: Deactivated successfully.
Feb 02 11:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-632f19d327916cb0b6724bfa8c39144fe325c17e6a81e2284b81996b1f746e4a-merged.mount: Deactivated successfully.
Feb 02 11:13:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:15 compute-0 sudo[95711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmystdgzgaervikikatoiiryqhtbmykp ; /usr/bin/python3'
Feb 02 11:13:15 compute-0 sudo[95711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:15 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 02 11:13:15 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 02 11:13:15 compute-0 python3[95721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.725949369 +0000 UTC m=+0.039480243 container create 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:13:15 compute-0 systemd[1]: Started libpod-conmon-93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91.scope.
Feb 02 11:13:15 compute-0 podman[95766]: 2026-02-02 11:13:15.769503145 +0000 UTC m=+0.037199130 container create 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:13:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:15 compute-0 systemd[1]: Started libpod-conmon-2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf.scope.
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.792032194 +0000 UTC m=+0.105563078 container init 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:13:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.795982434 +0000 UTC m=+0.109513308 container start 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20e76e823359087a98b59daba172ee910cece8c67917b4d6502917cbe180386/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20e76e823359087a98b59daba172ee910cece8c67917b4d6502917cbe180386/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:15 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 13 completed events
Feb 02 11:13:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:13:15 compute-0 determined_easley[95781]: 167 167
Feb 02 11:13:15 compute-0 systemd[1]: libpod-93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.80549632 +0000 UTC m=+0.119027204 container attach 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.711032873 +0000 UTC m=+0.024563777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.805901761 +0000 UTC m=+0.119432635 container died 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:13:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:15 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event fa3f36c7-7d82-4bf7-b01b-ec4efeb6353c (Global Recovery Event) in 5 seconds
Feb 02 11:13:15 compute-0 podman[95766]: 2026-02-02 11:13:15.81841219 +0000 UTC m=+0.086108185 container init 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:13:15 compute-0 podman[95766]: 2026-02-02 11:13:15.823479832 +0000 UTC m=+0.091175817 container start 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:15 compute-0 podman[95766]: 2026-02-02 11:13:15.827907425 +0000 UTC m=+0.095603410 container attach 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a47e9a9bc7b6cdd4f7baef8b1ae1efc1268cf4dbe0a1405689aff5c3f10d56-merged.mount: Deactivated successfully.
Feb 02 11:13:15 compute-0 podman[95752]: 2026-02-02 11:13:15.843197001 +0000 UTC m=+0.156727875 container remove 93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:13:15 compute-0 systemd[1]: libpod-conmon-93382dce07f0f2fbd9cc237ea87489f52cc22fde9e382caf3844b67b4fe38b91.scope: Deactivated successfully.
Feb 02 11:13:15 compute-0 podman[95766]: 2026-02-02 11:13:15.754347202 +0000 UTC m=+0.022043207 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 5.7 scrub starts
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 5.7 scrub ok
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 7.1f scrub starts
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 7.1f scrub ok
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 3.6 scrub starts
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 3.6 scrub ok
Feb 02 11:13:15 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3685075063' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 5.2 scrub starts
Feb 02 11:13:15 compute-0 ceph-mon[74676]: 5.2 scrub ok
Feb 02 11:13:15 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:15 compute-0 podman[95810]: 2026-02-02 11:13:15.956493444 +0000 UTC m=+0.032928831 container create 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:15 compute-0 systemd[1]: Started libpod-conmon-19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112.scope.
Feb 02 11:13:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28231afcb4e240c0d0d23d593a34e69d6202bcb87f95e014d94b1eecb627074f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28231afcb4e240c0d0d23d593a34e69d6202bcb87f95e014d94b1eecb627074f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28231afcb4e240c0d0d23d593a34e69d6202bcb87f95e014d94b1eecb627074f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28231afcb4e240c0d0d23d593a34e69d6202bcb87f95e014d94b1eecb627074f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:15.941222057 +0000 UTC m=+0.017657464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:16.040231701 +0000 UTC m=+0.116667118 container init 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:16.047408731 +0000 UTC m=+0.123844119 container start 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:16.051098915 +0000 UTC m=+0.127534352 container attach 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/326584623' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Feb 02 11:13:16 compute-0 frosty_chatterjee[95787]: [client.openstack]
Feb 02 11:13:16 compute-0 frosty_chatterjee[95787]:         key = AQDlhYBpAAAAABAAVlWxpfi06TnsRXPWuiAnKA==
Feb 02 11:13:16 compute-0 frosty_chatterjee[95787]:         caps mgr = "allow *"
Feb 02 11:13:16 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:13:16 compute-0 frosty_chatterjee[95787]:         caps mon = "profile rbd"
Feb 02 11:13:16 compute-0 frosty_chatterjee[95787]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb 02 11:13:16 compute-0 systemd[1]: libpod-2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf.scope: Deactivated successfully.
Feb 02 11:13:16 compute-0 conmon[95787]: conmon 2b8c2ecb05c1cc078ee7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf.scope/container/memory.events
Feb 02 11:13:16 compute-0 podman[95766]: 2026-02-02 11:13:16.295190668 +0000 UTC m=+0.562886653 container died 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:16 compute-0 podman[95766]: 2026-02-02 11:13:16.328280482 +0000 UTC m=+0.595976467 container remove 2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf (image=quay.io/ceph/ceph:v19, name=frosty_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:16 compute-0 systemd[1]: libpod-conmon-2b8c2ecb05c1cc078ee76e8f14b816896ee7c2d43a5f75a2215a4f7bebc989bf.scope: Deactivated successfully.
Feb 02 11:13:16 compute-0 sudo[95711]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d20e76e823359087a98b59daba172ee910cece8c67917b4d6502917cbe180386-merged.mount: Deactivated successfully.
Feb 02 11:13:16 compute-0 lvm[95933]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:13:16 compute-0 lvm[95933]: VG ceph_vg0 finished
Feb 02 11:13:16 compute-0 peaceful_tu[95846]: {}
Feb 02 11:13:16 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 02 11:13:16 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 02 11:13:16 compute-0 systemd[1]: libpod-19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112.scope: Deactivated successfully.
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:16.755848568 +0000 UTC m=+0.832283955 container died 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-28231afcb4e240c0d0d23d593a34e69d6202bcb87f95e014d94b1eecb627074f-merged.mount: Deactivated successfully.
Feb 02 11:13:16 compute-0 podman[95810]: 2026-02-02 11:13:16.79639926 +0000 UTC m=+0.872834647 container remove 19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:16 compute-0 systemd[1]: libpod-conmon-19331129dfcbf17628eb880c599c845044a0e690fea319bf167005381a934112.scope: Deactivated successfully.
Feb 02 11:13:16 compute-0 sudo[95661]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:16 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 11a5cc3b-4902-48ec-8410-f4b35d34009c (Updating mds.cephfs deployment (+3 -> 3))
Feb 02 11:13:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mzpewh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mzpewh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:16 compute-0 ceph-mon[74676]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:13:16 compute-0 ceph-mon[74676]: 7.a scrub starts
Feb 02 11:13:16 compute-0 ceph-mon[74676]: 5.6 scrub starts
Feb 02 11:13:16 compute-0 ceph-mon[74676]: 7.a scrub ok
Feb 02 11:13:16 compute-0 ceph-mon[74676]: 5.6 scrub ok
Feb 02 11:13:16 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/326584623' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Feb 02 11:13:16 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mzpewh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:16 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:16 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.mzpewh on compute-2
Feb 02 11:13:16 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.mzpewh on compute-2
Feb 02 11:13:17 compute-0 sudo[96093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskchnutlfeoezecvvtjewbeokunpnuu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770030797.1892824-37635-211786865441768/async_wrapper.py j186634503927 30 /home/zuul/.ansible/tmp/ansible-tmp-1770030797.1892824-37635-211786865441768/AnsiballZ_command.py _'
Feb 02 11:13:17 compute-0 sudo[96093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:17 compute-0 ansible-async_wrapper.py[96095]: Invoked with j186634503927 30 /home/zuul/.ansible/tmp/ansible-tmp-1770030797.1892824-37635-211786865441768/AnsiballZ_command.py _
Feb 02 11:13:17 compute-0 ansible-async_wrapper.py[96098]: Starting module and watcher
Feb 02 11:13:17 compute-0 ansible-async_wrapper.py[96098]: Start watching 96099 (30)
Feb 02 11:13:17 compute-0 ansible-async_wrapper.py[96099]: Start module (96099)
Feb 02 11:13:17 compute-0 ansible-async_wrapper.py[96095]: Return async_wrapper task started.
Feb 02 11:13:17 compute-0 sudo[96093]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 11:13:17 compute-0 python3[96100]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:17 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Feb 02 11:13:17 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Feb 02 11:13:17 compute-0 podman[96101]: 2026-02-02 11:13:17.782532628 +0000 UTC m=+0.038846936 container create da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:13:17 compute-0 systemd[1]: Started libpod-conmon-da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc.scope.
Feb 02 11:13:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477cd69f3617123846320e17f6c71a426d5c46b5be769c94eb25289aeda63f33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477cd69f3617123846320e17f6c71a426d5c46b5be769c94eb25289aeda63f33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:17 compute-0 podman[96101]: 2026-02-02 11:13:17.767375684 +0000 UTC m=+0.023690012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:17 compute-0 podman[96101]: 2026-02-02 11:13:17.864465875 +0000 UTC m=+0.120780203 container init da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:13:17 compute-0 podman[96101]: 2026-02-02 11:13:17.869617359 +0000 UTC m=+0.125931657 container start da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:13:17 compute-0 podman[96101]: 2026-02-02 11:13:17.87287577 +0000 UTC m=+0.129190108 container attach da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 3.a scrub starts
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 3.a scrub ok
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 5.c scrub starts
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 7.14 scrub starts
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 5.c scrub ok
Feb 02 11:13:17 compute-0 ceph-mon[74676]: 7.14 scrub ok
Feb 02 11:13:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mzpewh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mzpewh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:17 compute-0 ceph-mon[74676]: Deploying daemon mds.cephfs.compute-2.mzpewh on compute-2
Feb 02 11:13:18 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:18 compute-0 suspicious_gauss[96116]: 
Feb 02 11:13:18 compute-0 suspicious_gauss[96116]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
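
Note: the blank line and the JSON object above are the stdout of the short-lived suspicious_gauss container, i.e. the result of the `ceph orch status --format json` call whose audit dispatch ceph-mgr logged at 11:13:18. A minimal sketch for gating automation on this output, assuming an admin keyring under /etc/ceph and jq on the host (the jq filter is illustrative, not part of the original run):

    # Succeed only if the cephadm orchestrator reports itself available.
    ceph orch status --format json | jq -e '.available' >/dev/null \
      && echo "orchestrator up: backend=cephadm"
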
Feb 02 11:13:18 compute-0 systemd[1]: libpod-da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc.scope: Deactivated successfully.
Feb 02 11:13:18 compute-0 podman[96101]: 2026-02-02 11:13:18.249981347 +0000 UTC m=+0.506295665 container died da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-477cd69f3617123846320e17f6c71a426d5c46b5be769c94eb25289aeda63f33-merged.mount: Deactivated successfully.
Feb 02 11:13:18 compute-0 podman[96101]: 2026-02-02 11:13:18.281961529 +0000 UTC m=+0.538275837 container remove da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc (image=quay.io/ceph/ceph:v19, name=suspicious_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:13:18 compute-0 systemd[1]: libpod-conmon-da8e1826c1b88dfc3d0ad93d92128459db62d195d0022b8c8bf74c90ceb2a1fc.scope: Deactivated successfully.
Feb 02 11:13:18 compute-0 ansible-async_wrapper.py[96099]: Module complete (96099)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kwzngg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kwzngg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kwzngg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:18 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.kwzngg on compute-0
Feb 02 11:13:18 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.kwzngg on compute-0
Feb 02 11:13:18 compute-0 sudo[96153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:18 compute-0 sudo[96153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:18 compute-0 sudo[96153]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:18 compute-0 sudo[96178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:18 compute-0 sudo[96178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:18 compute-0 sudo[96249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxsgqeqfhjqbukdkcohqcxgnfvfxhxar ; /usr/bin/python3'
Feb 02 11:13:18 compute-0 sudo[96249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:18 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 02 11:13:18 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 02 11:13:18 compute-0 python3[96258]: ansible-ansible.legacy.async_status Invoked with jid=j186634503927.96095 mode=status _async_dir=/root/.ansible_async
Feb 02 11:13:18 compute-0 sudo[96249]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.859969424 +0000 UTC m=+0.038762063 container create 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:13:18 compute-0 systemd[1]: Started libpod-conmon-9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309.scope.
Feb 02 11:13:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.917017727 +0000 UTC m=+0.095810376 container init 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 3.5 deep-scrub starts
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 3.5 deep-scrub ok
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 7.16 scrub starts
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 7.16 scrub ok
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 3.7 deep-scrub starts
Feb 02 11:13:18 compute-0 ceph-mon[74676]: 3.7 deep-scrub ok
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kwzngg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kwzngg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.921970225 +0000 UTC m=+0.100762864 container start 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:13:18 compute-0 dazzling_margulis[96331]: 167 167
Feb 02 11:13:18 compute-0 systemd[1]: libpod-9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309.scope: Deactivated successfully.
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.928560789 +0000 UTC m=+0.107353438 container attach 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.929155986 +0000 UTC m=+0.107948635 container died 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
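
Note: dazzling_margulis lives for well under a second and prints only "167 167". This is cephadm probing the image for the uid:gid of the ceph user (167:167 in upstream Ceph images) so that host-side directories under /var/lib/ceph/<fsid> can be created with matching ownership. Roughly equivalent by hand (a sketch; the exact entrypoint cephadm uses is an assumption here):

    # Ask the image which uid/gid owns /var/lib/ceph; expect "167 167".
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
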
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e3 new map
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-02-02T11:13:18.913196+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:06.742944+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.mzpewh{-1:24250} state up:standby seq 1 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
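
Note: the indented block is the monitor's print_map dump of FSMap epoch 3: the filesystem cephfs exists, no rank is assigned yet (in/up are empty), and mds.cephfs.compute-2.mzpewh has registered as a standby. The same map can be read back at any time from the CLI:

    # Full FSMap, matching the print_map output above.
    ceph fs dump
    # Condensed per-rank view.
    ceph fs status cephfs
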
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.841764746 +0000 UTC m=+0.020557405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:boot
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] as mds.0
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mzpewh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"} v 0)
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"}]: dispatch
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e3 all = 0
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e4 new map
Feb 02 11:13:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-02-02T11:13:18.940446+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:18.940439+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:creating seq 1 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Feb 02 11:13:18 compute-0 sudo[96360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtrhuvdlwsjhzjnllwusixihdrlbniwu ; /usr/bin/python3'
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:creating}
Feb 02 11:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8661a64696bbe14310d489b757ca30efb7d347933e15a6d78b56bb0e78b36b7f-merged.mount: Deactivated successfully.
Feb 02 11:13:18 compute-0 sudo[96360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:18 compute-0 podman[96292]: 2026-02-02 11:13:18.968172635 +0000 UTC m=+0.146965274 container remove 9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:18 compute-0 systemd[1]: libpod-conmon-9d98f65067317835683d9eb56f512152bc1dc3c1839b30bea99c1a8f6f907309.scope: Deactivated successfully.
Feb 02 11:13:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mzpewh is now active in filesystem cephfs as rank 0
Feb 02 11:13:19 compute-0 systemd[1]: Reloading.
Feb 02 11:13:19 compute-0 systemd-rc-local-generator[96401]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:19 compute-0 systemd-sysv-generator[96405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:19 compute-0 python3[96373]: ansible-ansible.legacy.async_status Invoked with jid=j186634503927.96095 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 11:13:19 compute-0 sudo[96360]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:19 compute-0 systemd[1]: Reloading.
Feb 02 11:13:19 compute-0 systemd-sysv-generator[96446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:19 compute-0 systemd-rc-local-generator[96443]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:19 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.kwzngg for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:13:19 compute-0 sudo[96482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izxyqwwjygpmnvmmspiaubkxegpqvkdd ; /usr/bin/python3'
Feb 02 11:13:19 compute-0 sudo[96482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:19 compute-0 podman[96524]: 2026-02-02 11:13:19.623651942 +0000 UTC m=+0.035190104 container create 755352a25c5b12f1298f48320dfeabff41251cb6c2a7461fab89fd07192d7d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mds-cephfs-compute-0-kwzngg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:13:19 compute-0 python3[96491]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
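
Note: ansible logs the command module's _raw_params on a single line; re-wrapped for readability, the same invocation is:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json
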
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf16f6922a47d7231040d03015fca18b31a44c3f66d4cd095e56d3ca28a33b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf16f6922a47d7231040d03015fca18b31a44c3f66d4cd095e56d3ca28a33b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf16f6922a47d7231040d03015fca18b31a44c3f66d4cd095e56d3ca28a33b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf16f6922a47d7231040d03015fca18b31a44c3f66d4cd095e56d3ca28a33b8e/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.kwzngg supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 podman[96524]: 2026-02-02 11:13:19.674265795 +0000 UTC m=+0.085803977 container init 755352a25c5b12f1298f48320dfeabff41251cb6c2a7461fab89fd07192d7d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mds-cephfs-compute-0-kwzngg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:13:19 compute-0 podman[96524]: 2026-02-02 11:13:19.678718259 +0000 UTC m=+0.090256411 container start 755352a25c5b12f1298f48320dfeabff41251cb6c2a7461fab89fd07192d7d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mds-cephfs-compute-0-kwzngg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:13:19 compute-0 bash[96524]: 755352a25c5b12f1298f48320dfeabff41251cb6c2a7461fab89fd07192d7d3d
Feb 02 11:13:19 compute-0 podman[96524]: 2026-02-02 11:13:19.60819886 +0000 UTC m=+0.019737062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:19 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.kwzngg for 1d33f80b-d6ca-501c-bac7-184379b89279.
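
Note: cephadm runs each daemon under a templated systemd unit named ceph-<fsid>@<type>.<id>.service, which is what the "Started Ceph mds.cephfs.compute-0.kwzngg for 1d33f80b-…" line reports. Once deployed, the daemon is managed like any other unit; for example (unit name assembled from the fsid and daemon id above):

    systemctl status ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mds.cephfs.compute-0.kwzngg.service
    journalctl -u ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mds.cephfs.compute-0.kwzngg.service -n 50
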
Feb 02 11:13:19 compute-0 podman[96541]: 2026-02-02 11:13:19.706444013 +0000 UTC m=+0.047874168 container create 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:13:19 compute-0 ceph-mds[96554]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 11:13:19 compute-0 ceph-mds[96554]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Feb 02 11:13:19 compute-0 ceph-mds[96554]: main not setting numa affinity
Feb 02 11:13:19 compute-0 ceph-mds[96554]: pidfile_write: ignore empty --pid-file
Feb 02 11:13:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mds-cephfs-compute-0-kwzngg[96539]: starting mds.cephfs.compute-0.kwzngg at 
Feb 02 11:13:19 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 02 11:13:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 7 op/s
Feb 02 11:13:19 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Updating MDS map to version 4 from mon.0
Feb 02 11:13:19 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 02 11:13:19 compute-0 systemd[1]: Started libpod-conmon-9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f.scope.
Feb 02 11:13:19 compute-0 sudo[96178]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3560f09eb9c0a96c834c030d4b4e65808b4b8dd18e739f4dbadcec16ed1371/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3560f09eb9c0a96c834c030d4b4e65808b4b8dd18e739f4dbadcec16ed1371/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 podman[96541]: 2026-02-02 11:13:19.683290347 +0000 UTC m=+0.024720532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:19 compute-0 podman[96541]: 2026-02-02 11:13:19.789272495 +0000 UTC m=+0.130702670 container init 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:19 compute-0 podman[96541]: 2026-02-02 11:13:19.795659973 +0000 UTC m=+0.137090128 container start 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:13:19 compute-0 podman[96541]: 2026-02-02 11:13:19.798582805 +0000 UTC m=+0.140013170 container attach 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ajwnpf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ajwnpf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ajwnpf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ajwnpf on compute-1
Feb 02 11:13:19 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ajwnpf on compute-1
Feb 02 11:13:19 compute-0 ceph-mon[74676]: Deploying daemon mds.cephfs.compute-0.kwzngg on compute-0
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 3.c scrub starts
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 3.c scrub ok
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 5.1e scrub starts
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 5.1e scrub ok
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 7.11 scrub starts
Feb 02 11:13:19 compute-0 ceph-mon[74676]: 7.11 scrub ok
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:boot
Feb 02 11:13:19 compute-0 ceph-mon[74676]: daemon mds.cephfs.compute-2.mzpewh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: Cluster is now healthy
Feb 02 11:13:19 compute-0 ceph-mon[74676]: fsmap cephfs:0 1 up:standby
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:creating}
Feb 02 11:13:19 compute-0 ceph-mon[74676]: daemon mds.cephfs.compute-2.mzpewh is now active in filesystem cephfs as rank 0
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ajwnpf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ajwnpf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 11:13:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e5 new map
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-02-02T11:13:19.955497+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:19.955494+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24250 members: 24250
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:active seq 2 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kwzngg{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] compat {c=[1],r=[1],i=[1fff]}]
Feb 02 11:13:19 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Updating MDS map to version 5 from mon.0
Feb 02 11:13:19 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Monitors have assigned me to become a standby
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:active
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] up:boot
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 1 up:standby
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"} v 0)
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"}]: dispatch
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e5 all = 0
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e6 new map
Feb 02 11:13:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2026-02-02T11:13:19.978585+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:19.955494+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24250 members: 24250
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:active seq 2 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kwzngg{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] compat {c=[1],r=[1],i=[1fff]}]
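
Note: the only delta between e5 and e6 is standby_count_wanted moving from 0 to 1, recorded once mds.cephfs.compute-0.kwzngg joined the standby pool; the monitor appears to track one wanted standby for the filesystem from this point (an interpretation, not stated in the log). The same knob is operator-settable:

    # Request that (at least) one standby MDS be kept for cephfs.
    ceph fs set cephfs standby_count_wanted 1
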
Feb 02 11:13:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 1 up:standby
Feb 02 11:13:20 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:20 compute-0 goofy_leakey[96577]: 
Feb 02 11:13:20 compute-0 goofy_leakey[96577]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 11:13:20 compute-0 systemd[1]: libpod-9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f.scope: Deactivated successfully.
Feb 02 11:13:20 compute-0 podman[96541]: 2026-02-02 11:13:20.177750609 +0000 UTC m=+0.519180754 container died 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3560f09eb9c0a96c834c030d4b4e65808b4b8dd18e739f4dbadcec16ed1371-merged.mount: Deactivated successfully.
Feb 02 11:13:20 compute-0 podman[96541]: 2026-02-02 11:13:20.215461302 +0000 UTC m=+0.556891467 container remove 9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f (image=quay.io/ceph/ceph:v19, name=goofy_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:13:20 compute-0 systemd[1]: libpod-conmon-9c35370ec927a0938acb90b5feef2bfe891a04fcdb401151ea3d24e76ef4d80f.scope: Deactivated successfully.
Feb 02 11:13:20 compute-0 sudo[96482]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 02 11:13:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 02 11:13:20 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 14 completed events
Feb 02 11:13:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:13:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:20 compute-0 sudo[96638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbiabslmmvnimqvwwsbgfnnulpxfhgzn ; /usr/bin/python3'
Feb 02 11:13:20 compute-0 sudo[96638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:20 compute-0 python3[96640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:20 compute-0 ceph-mon[74676]: 5.1f scrub starts
Feb 02 11:13:20 compute-0 ceph-mon[74676]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 7 op/s
Feb 02 11:13:20 compute-0 ceph-mon[74676]: 5.1f scrub ok
Feb 02 11:13:20 compute-0 ceph-mon[74676]: Deploying daemon mds.cephfs.compute-1.ajwnpf on compute-1
Feb 02 11:13:20 compute-0 ceph-mon[74676]: 2.19 scrub starts
Feb 02 11:13:20 compute-0 ceph-mon[74676]: 2.19 scrub ok
Feb 02 11:13:20 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:active
Feb 02 11:13:20 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] up:boot
Feb 02 11:13:20 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 1 up:standby
Feb 02 11:13:20 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"}]: dispatch
Feb 02 11:13:20 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 1 up:standby
Feb 02 11:13:20 compute-0 ceph-mon[74676]: from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:20 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.028244901 +0000 UTC m=+0.040184023 container create 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:13:21 compute-0 systemd[1]: Started libpod-conmon-8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5.scope.
Feb 02 11:13:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b872f8ba30365ae912a0df229147bac5ebf31a504e8c78ebe1339a8c2c89f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b872f8ba30365ae912a0df229147bac5ebf31a504e8c78ebe1339a8c2c89f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.010402383 +0000 UTC m=+0.022341535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.110166918 +0000 UTC m=+0.122106070 container init 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.114057617 +0000 UTC m=+0.125996739 container start 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.117135182 +0000 UTC m=+0.129074294 container attach 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 11a5cc3b-4902-48ec-8410-f4b35d34009c (Updating mds.cephfs deployment (+3 -> 3))
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 11a5cc3b-4902-48ec-8410-f4b35d34009c (Updating mds.cephfs deployment (+3 -> 3)) in 4 seconds
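The two progress lines above show the mgr progress module closing out the "Updating mds.cephfs deployment" event once all three MDS daemons are placed; the module is also what produced the earlier "Writing back 14 completed events" entry. A sketch for inspecting those events yourself (assumes the progress module's `ceph progress json` dump; the exact field names are from memory and should be verified against your release):

    # Sketch: dump the mgr progress module's event log, print completed events.
    import json
    import subprocess

    raw = subprocess.run(["ceph", "progress", "json"],
                         check=True, capture_output=True, text=True).stdout
    report = json.loads(raw)
    for ev in report.get("completed", []):
        # "message" matches the text journald shows above; "duration" is an
        # assumption about the completed-event schema.
        print(ev.get("id"), ev.get("message"), ev.get("duration"))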
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev a885bb69-9854-4651-899f-dba599ed3db4 (Updating nfs.cephfs deployment (+3 -> 3))
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
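Each NFS-Ganesha daemon gets its own cephx identity: the caps granted above allow read-only mon access and read/write only on the `.nfs` pool under the `cephfs` namespace, so one gateway cannot touch unrelated cluster objects. A sketch of issuing the same mon command through the librados Python binding (python3-rados and an admin keyring assumed; entity and caps strings copied from the log):

    import json
    import rados

    # Connect with the admin identity, as the mgr does implicitly above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo",
        "caps": ["mon", "allow r",
                 "osd", "allow rw pool=.nfs namespace=cephfs"],
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()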
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.jnnwjo's ganesha conf is defaulting to empty
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.jnnwjo's ganesha conf is defaulting to empty
Feb 02 11:13:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.jnnwjo on compute-1
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.jnnwjo on compute-1
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14577 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:21 compute-0 goofy_jemison[96657]: 
Feb 02 11:13:21 compute-0 goofy_jemison[96657]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
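The JSON the container prints above is the cluster's complete service-spec export: alertmanager, crash, grafana, the two ingress services, mds, mgr, mon, nfs, node-exporter, osd, prometheus, and rgw, each with its placement. The export round-trips: converted to YAML it can be fed back through `ceph orch apply -i` to recreate the same layout. A sketch of that conversion (assumes the export was saved to a local file named orch_ls_export.json; PyYAML is an extra dependency):

    import json
    import yaml  # pip install pyyaml

    with open("orch_ls_export.json") as fh:
        specs = json.load(fh)

    # "ceph orch apply -i" accepts one YAML document per spec.
    print("---\n".join(yaml.safe_dump(s, sort_keys=False) for s in specs))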
Feb 02 11:13:21 compute-0 systemd[1]: libpod-8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5.scope: Deactivated successfully.
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.499380433 +0000 UTC m=+0.511319555 container died 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-99b872f8ba30365ae912a0df229147bac5ebf31a504e8c78ebe1339a8c2c89f0-merged.mount: Deactivated successfully.
Feb 02 11:13:21 compute-0 podman[96641]: 2026-02-02 11:13:21.539832112 +0000 UTC m=+0.551771234 container remove 8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5 (image=quay.io/ceph/ceph:v19, name=goofy_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:13:21 compute-0 systemd[1]: libpod-conmon-8a687dc6a3594d89fb6f31824b826d0f14e3314068365dc5b805126c56db42d5.scope: Deactivated successfully.
Feb 02 11:13:21 compute-0 sudo[96638]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Feb 02 11:13:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 02 11:13:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e7 new map
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2026-02-02T11:13:22.162392+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:19.955494+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24250 members: 24250
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:active seq 2 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kwzngg{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ajwnpf{-1:24278} state up:standby seq 1 addr [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] compat {c=[1],r=[1],i=[1fff]}]
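The print_map dump above decodes as: one active rank (0, held by mds.cephfs.compute-2.mzpewh), as required by max_mds 1, with the compute-0 and compute-1 daemons held in reserve, satisfying standby_count_wanted 1. A sketch for pulling the same map as JSON instead of scraping the print_map text (the field layout is from memory; verify against your release):

    import json
    import subprocess

    raw = subprocess.run(["ceph", "fs", "dump", "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    fsmap = json.loads(raw)
    for fs in fsmap["filesystems"]:
        m = fs["mdsmap"]
        active = [i["name"] for i in m["info"].values()
                  if i["state"] == "up:active"]
        print(m["fs_name"], "active:", active, "max_mds:", m["max_mds"])
    print("standbys:", [s["name"] for s in fsmap.get("standbys", [])])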
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] up:boot
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"} v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e7 all = 0
Feb 02 11:13:22 compute-0 ceph-mon[74676]: 5.11 scrub starts
Feb 02 11:13:22 compute-0 ceph-mon[74676]: 5.11 scrub ok
Feb 02 11:13:22 compute-0 ceph-mon[74676]: 3.19 scrub starts
Feb 02 11:13:22 compute-0 ceph-mon[74676]: 3.19 scrub ok
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb 02 11:13:22 compute-0 ceph-mon[74676]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:13:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:22 compute-0 sudo[96752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttcncxfwtflwlqznydvweqwshaylezi ; /usr/bin/python3'
Feb 02 11:13:22 compute-0 sudo[96752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:22 compute-0 python3[96754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:22 compute-0 podman[96755]: 2026-02-02 11:13:22.513500262 +0000 UTC m=+0.042600220 container create 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:13:22 compute-0 systemd[1]: Started libpod-conmon-92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2.scope.
Feb 02 11:13:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b7caf4ae0ca719e0f60bab2d63ccad8e248016389379229bc9db5dbcded5c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b7caf4ae0ca719e0f60bab2d63ccad8e248016389379229bc9db5dbcded5c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:22 compute-0 podman[96755]: 2026-02-02 11:13:22.49515423 +0000 UTC m=+0.024254208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:22 compute-0 podman[96755]: 2026-02-02 11:13:22.591831029 +0000 UTC m=+0.120930997 container init 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:13:22 compute-0 ansible-async_wrapper.py[96098]: Done in kid B.
Feb 02 11:13:22 compute-0 podman[96755]: 2026-02-02 11:13:22.598962078 +0000 UTC m=+0.128062036 container start 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:13:22 compute-0 podman[96755]: 2026-02-02 11:13:22.602794915 +0000 UTC m=+0.131894903 container attach 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Feb 02 11:13:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:22 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes
Feb 02 11:13:22 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb 02 11:13:22 compute-0 ceph-mgr[74969]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb 02 11:13:22 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:22 compute-0 stupefied_clarke[96772]: 
Feb 02 11:13:22 compute-0 stupefied_clarke[96772]: [{"container_id": "f1fcd4cff832", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.09%", "created": "2026-02-02T11:10:36.404257Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667545Z", "memory_usage": 7778336, "ports": [], "service_name": "crash", "started": "2026-02-02T11:10:36.330674Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@crash.compute-0", "version": "19.2.3"}, {"container_id": "a710ea04d895", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.34%", "created": "2026-02-02T11:11:08.605017Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T11:13:07.486726Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2026-02-02T11:11:08.518938Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@crash.compute-1", "version": "19.2.3"}, {"container_id": "bc438c7d1706", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.24%", "created": "2026-02-02T11:12:00.900495Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T11:13:07.521195Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2026-02-02T11:12:00.770261Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.kwzngg", "daemon_name": "mds.cephfs.compute-0.kwzngg", "daemon_type": "mds", "events": ["2026-02-02T11:13:19.793991Z daemon:mds.cephfs.compute-0.kwzngg [INFO] \"Deployed mds.cephfs.compute-0.kwzngg on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-1.ajwnpf", "daemon_name": "mds.cephfs.compute-1.ajwnpf", "daemon_type": "mds", "events": ["2026-02-02T11:13:21.181543Z daemon:mds.cephfs.compute-1.ajwnpf [INFO] \"Deployed mds.cephfs.compute-1.ajwnpf on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.mzpewh", "daemon_name": "mds.cephfs.compute-2.mzpewh", "daemon_type": "mds", "events": ["2026-02-02T11:13:18.424158Z daemon:mds.cephfs.compute-2.mzpewh [INFO] \"Deployed mds.cephfs.compute-2.mzpewh on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "d3aa79f70c71", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "29.16%", "created": "2026-02-02T11:10:01.833262Z", "daemon_id": "compute-0.dhyzzj", "daemon_name": "mgr.compute-0.dhyzzj", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667448Z", "memory_usage": 541904076, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-02T11:10:01.754407Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mgr.compute-0.dhyzzj", "version": "19.2.3"}, {"container_id": "fe45da392880", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.52%", "created": "2026-02-02T11:11:59.228480Z", "daemon_id": "compute-1.iybsjv", "daemon_name": "mgr.compute-1.iybsjv", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T11:13:07.487025Z", "memory_usage": 504365056, "ports": [8765], "service_name": "mgr", "started": "2026-02-02T11:11:59.148801Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mgr.compute-1.iybsjv", "version": "19.2.3"}, {"container_id": "049bef835faf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "35.84%", "created": "2026-02-02T11:11:53.745286Z", "daemon_id": "compute-2.zebspe", "daemon_name": "mgr.compute-2.zebspe", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T11:13:07.521121Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2026-02-02T11:11:53.657425Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mgr.compute-2.zebspe", "version": "19.2.3"}, {"container_id": "88d564d338f4", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.95%", "created": "2026-02-02T11:09:58.547168Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667296Z", "memory_request": 2147483648, "memory_usage": 62935531, "ports": [], "service_name": "mon", "started": "2026-02-02T11:10:00.317958Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mon.compute-0", "version": "19.2.3"}, {"container_id": "c1ea1b531377", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.79%", "created": "2026-02-02T11:11:48.703214Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T11:13:07.486951Z", "memory_request": 2147483648, "memory_usage": 51663339, "ports": [], "service_name": "mon", "started": "2026-02-02T11:11:48.633921Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mon.compute-1", "version": "19.2.3"}, {"container_id": "79fddc87d4a9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.59%", "created": "2026-02-02T11:11:47.167094Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T11:13:07.521007Z", "memory_request": 2147483648, "memory_usage": 48549068, "ports": [], "service_name": "mon", "started": "2026-02-02T11:11:47.069253Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.0.0.compute-1.jnnwjo", "daemon_name": "nfs.cephfs.0.0.compute-1.jnnwjo", "daemon_type": "nfs", "events": ["2026-02-02T11:13:22.844769Z daemon:nfs.cephfs.0.0.compute-1.jnnwjo [INFO] \"Deployed nfs.cephfs.0.0.compute-1.jnnwjo on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [12049], "rank": 0, "rank_generation": 0, "service_name": "nfs.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "c94edc7af472", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.10%", "created": "2026-02-02T11:12:41.121902Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667890Z", "memory_usage": 5866782, "ports": [9100], "service_name": "node-exporter", "started": "2026-02-02T11:12:41.038853Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@node-exporter.compute-0", "version": "1.7.0"}, {"container_id": "3e8072446f04", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.09%", "created": "2026-02-02T11:12:53.723910Z", "daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T11:13:07.487170Z", "memory_usage": 5879365, "ports": [9100], "service_name": "node-exporter", "started": "2026-02-02T11:12:53.653811Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@node-exporter.compute-1", "version": "1.7.0"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2026-02-02T11:13:12.985805Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "0576ae0b033f", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.66%", "created": "2026-02-02T11:11:18.513524Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667684Z", "memory_request": 4294967296, "memory_usage": 69059215, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:11:18.421718Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@osd.1", "version": "19.2.3"}, {"container_id": "1b7d631da423", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.22%", "created": "2026-02-02T11:11:18.166319Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T11:13:07.486876Z", "memory_request": 4294967296, "memory_usage": 78894858, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:11:18.091348Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@osd.0", "version": "19.2.3"}, {"container_id": "1242c4b0af3a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.07%", "created": "2026-02-02T11:12:20.776075Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T11:13:07.521266Z", "memory_request": 4294967296, "memory_usage": 67580723, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:12:20.696304Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@osd.2", "version": "19.2.3"}, {"container_id": "79f18d781bf3", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.93%", "created": "2026-02-02T11:12:38.056815Z", "daemon_id": "rgw.compute-0.jqfvjy", "daemon_name": "rgw.rgw.compute-0.jqfvjy", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-02-02T11:13:07.667817Z", "memory_usage": 104396226, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-02T11:12:37.966827Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@rgw.rgw.compute-0.jqfvjy", "version": "19.2.3"}, {"container_id": "142da841d7c6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.89%", "created": "2026-02-02T11:12:36.560422Z", "daemon_id": "rgw.compute-1.jqjceq", "daemon_name": "rgw.rgw.compute-1.jqjceq", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2026-02-02T11:13:07.487098Z", "memory_usage": 103095992, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-02T11:12:36.484565Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@rgw.rgw.c
Feb 02 11:13:22 compute-0 stupefied_clarke[96772]: ompute-1.jqjceq", "version": "19.2.3"}, {"container_id": "fb50ed9bb6c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.91%", "created": "2026-02-02T11:12:34.592478Z", "daemon_id": "rgw.compute-2.xfsamf", "daemon_name": "rgw.rgw.compute-2.xfsamf", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2026-02-02T11:13:07.521339Z", "memory_usage": 102823362, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-02T11:12:34.317075Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1d33f80b-d6ca-501c-bac7-184379b89279@rgw.rgw.compute-2.xfsamf", "version": "19.2.3"}]
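In the `orch ps` output above, the long-running daemons (crash, mgr, mon, osd, rgw, node-exporter on compute-0/1) report status 1 / "running", while the just-deployed mds and nfs daemons still show status 2 / "starting" because cephadm has not refreshed them since deployment. A sketch that polls until nothing is left starting (command as in the log, run on a host with the ceph CLI and admin keyring; timings are arbitrary):

    import json
    import subprocess
    import time

    def orch_ps():
        out = subprocess.run(["ceph", "orch", "ps", "-f", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    for _ in range(30):            # ~5 minutes worst case; tune to taste
        pending = [d["daemon_name"] for d in orch_ps()
                   if d.get("status_desc") != "running"]
        if not pending:
            print("all daemons running")
            break
        print("still starting:", ", ".join(pending))
        time.sleep(10)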
Feb 02 11:13:23 compute-0 systemd[1]: libpod-92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2.scope: Deactivated successfully.
Feb 02 11:13:23 compute-0 podman[96755]: 2026-02-02 11:13:23.013289663 +0000 UTC m=+0.542389621 container died 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-68b7caf4ae0ca719e0f60bab2d63ccad8e248016389379229bc9db5dbcded5c7-merged.mount: Deactivated successfully.
Feb 02 11:13:23 compute-0 rsyslogd[1006]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "f1fcd4cff832", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
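The rsyslogd complaint is why the `orch ps` JSON above reaches the log cut short: the message is roughly 16 KiB, rsyslog's configured ceiling here is 8096 bytes, and anything over the limit is truncated (the linked error page covers this). If the full payload must survive the syslog path, the per-message limit can be raised early in /etc/rsyslog.conf; a sketch using the modern global() directive (the legacy spelling is `$MaxMessageSize 64k`, and either form must precede any input module loads):

    # /etc/rsyslog.conf -- set before any input modules are loaded
    global(maxMessageSize="64k")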
Feb 02 11:13:23 compute-0 podman[96755]: 2026-02-02 11:13:23.048936068 +0000 UTC m=+0.578036026 container remove 92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2 (image=quay.io/ceph/ceph:v19, name=stupefied_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:13:23 compute-0 systemd[1]: libpod-conmon-92a201ae04029c80e8886b2cd8510aa4090ac1eeba89fc56c92026f2d26710a2.scope: Deactivated successfully.
Feb 02 11:13:23 compute-0 sudo[96752]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:23 compute-0 ceph-mon[74676]: Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:23 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.0.0.compute-1.jnnwjo-rgw
Feb 02 11:13:23 compute-0 ceph-mon[74676]: Bind address in nfs.cephfs.0.0.compute-1.jnnwjo's ganesha conf is defaulting to empty
Feb 02 11:13:23 compute-0 ceph-mon[74676]: Deploying daemon nfs.cephfs.0.0.compute-1.jnnwjo on compute-1
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='client.14577 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:23 compute-0 ceph-mon[74676]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Feb 02 11:13:23 compute-0 ceph-mon[74676]: 5.10 scrub starts
Feb 02 11:13:23 compute-0 ceph-mon[74676]: 5.10 scrub ok
Feb 02 11:13:23 compute-0 ceph-mon[74676]: 3.17 scrub starts
Feb 02 11:13:23 compute-0 ceph-mon[74676]: 3.17 scrub ok
Feb 02 11:13:23 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] up:boot
Feb 02 11:13:23 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"}]: dispatch
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 02 11:13:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 10 op/s
Feb 02 11:13:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 02 11:13:23 compute-0 sudo[96847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqhwbjnnvkhtryuuxvngegrmmvxftcv ; /usr/bin/python3'
Feb 02 11:13:23 compute-0 sudo[96847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e8 new map
Feb 02 11:13:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2026-02-02T11:13:23:877474+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:22.972818+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24250 members: 24250
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kwzngg{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ajwnpf{-1:24278} state up:standby seq 1 addr [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] compat {c=[1],r=[1],i=[1fff]}]
Feb 02 11:13:23 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Updating MDS map to version 8 from mon.0
Feb 02 11:13:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:active
Feb 02 11:13:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] up:standby
Feb 02 11:13:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:23 compute-0 python3[96849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:23 compute-0 podman[96850]: 2026-02-02 11:13:23.971327747 +0000 UTC m=+0.034021341 container create 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:24 compute-0 systemd[1]: Started libpod-conmon-1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39.scope.
Feb 02 11:13:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f333c3c3047fa9cebd80a6e420adc9e9a210dd7a6140d4cdcf915f454a114636/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f333c3c3047fa9cebd80a6e420adc9e9a210dd7a6140d4cdcf915f454a114636/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:24.044841819 +0000 UTC m=+0.107535443 container init 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:24.04989743 +0000 UTC m=+0.112591024 container start 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:23.956494502 +0000 UTC m=+0.019188116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:24.053531901 +0000 UTC m=+0.116225525 container attach 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:13:24 compute-0 ceph-mon[74676]: 6.15 scrub starts
Feb 02 11:13:24 compute-0 ceph-mon[74676]: 6.15 scrub ok
Feb 02 11:13:24 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes
Feb 02 11:13:24 compute-0 ceph-mon[74676]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb 02 11:13:24 compute-0 ceph-mon[74676]: 7.18 scrub starts
Feb 02 11:13:24 compute-0 ceph-mon[74676]: 7.18 scrub ok
Feb 02 11:13:24 compute-0 ceph-mon[74676]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 11:13:24 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] up:active
Feb 02 11:13:24 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] up:standby
Feb 02 11:13:24 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:24 compute-0 blissful_murdock[96865]: 
Feb 02 11:13:24 compute-0 blissful_murdock[96865]: {"fsid":"1d33f80b-d6ca-501c-bac7-184379b89279","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":86,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1770030751,"num_in_osds":3,"osd_in_since":1770030722,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":218,"data_bytes":467041,"bytes_used":107548672,"bytes_avail":64304377856,"bytes_total":64411926528,"read_bytes_sec":15098,"write_bytes_sec":1364,"read_op_per_sec":5,"write_op_per_sec":4},"fsmap":{"epoch":8,"btime":"2026-02-02T11:13:23:877474+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.mzpewh","status":"up:active","gid":24250}],"up:standby":2},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-02-02T11:12:48.536497+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.dhyzzj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.iybsjv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.zebspe":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":4,"start_stamp":"2026-02-02T11:12:47.531533+0000","gid":14382,"addr":"192.168.122.100:0/4235536652","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.jqfvjy","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864296","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}},"24151":{"start_epoch":5,"start_stamp":"2026-02-02T11:12:47.545389+0000","gid":24151,"addr":"192.168.122.102:0/4081522708","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.xfsamf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}},"24170":{"start_epoch":4,"start_stamp":"2026-02-02T11:12:47.541797+0000","gid":24170,"addr":"192.168.122.101:0/3032189850","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.jqjceq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"84bebf22-a60c-4c1e-abe8-47242680dd4d","zone_name":"default","zonegroup_id":"294c55f9-f7f9-445d-9954-ab8641436668","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"a885bb69-9854-4651-899f-dba599ed3db4":{"message":"Updating nfs.cephfs deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Feb 02 11:13:24 compute-0 systemd[1]: libpod-1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39.scope: Deactivated successfully.
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:24.492706951 +0000 UTC m=+0.555400545 container died 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f333c3c3047fa9cebd80a6e420adc9e9a210dd7a6140d4cdcf915f454a114636-merged.mount: Deactivated successfully.
Feb 02 11:13:24 compute-0 podman[96850]: 2026-02-02 11:13:24.527393489 +0000 UTC m=+0.590087083 container remove 1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39 (image=quay.io/ceph/ceph:v19, name=blissful_murdock, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:13:24 compute-0 systemd[1]: libpod-conmon-1d9f05a37c0a747783390793f0b5e16ba02e61c6ae598e939fe44b62b936db39.scope: Deactivated successfully.
Feb 02 11:13:24 compute-0 sudo[96847]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 02 11:13:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 new map
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2026-02-02T11:13:25:220227+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T11:13:06.742944+0000
                                           modified        2026-02-02T11:13:22.972818+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24250}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24250 members: 24250
                                           [mds.cephfs.compute-2.mzpewh{0:24250} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/4029911992,v1:192.168.122.102:6805/4029911992] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kwzngg{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3526018863,v1:192.168.122.100:6807/3526018863] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ajwnpf{-1:24278} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] compat {c=[1],r=[1],i=[1fff]}]
Feb 02 11:13:25 compute-0 sudo[96924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdofvvxqjitxghdhwguttyechyjgfenf ; /usr/bin/python3'
Feb 02 11:13:25 compute-0 sudo[96924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] up:standby
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:25 compute-0 ceph-mon[74676]: 6.8 scrub starts
Feb 02 11:13:25 compute-0 ceph-mon[74676]: pgmap v16: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 10 op/s
Feb 02 11:13:25 compute-0 ceph-mon[74676]: 6.8 scrub ok
Feb 02 11:13:25 compute-0 ceph-mon[74676]: 7.1e scrub starts
Feb 02 11:13:25 compute-0 ceph-mon[74676]: 7.1e scrub ok
Feb 02 11:13:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1750048645' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb 02 11:13:25 compute-0 python3[96926]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.437502005 +0000 UTC m=+0.038538017 container create 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:13:25 compute-0 systemd[1]: Started libpod-conmon-656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04.scope.
Feb 02 11:13:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386433db0f201e52986761bffc0800940363ae8834ebc67b6fa01d0378e8500/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386433db0f201e52986761bffc0800940363ae8834ebc67b6fa01d0378e8500/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.486001569 +0000 UTC m=+0.087037601 container init 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.491837632 +0000 UTC m=+0.092873644 container start 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.495041761 +0000 UTC m=+0.096077793 container attach 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.419479272 +0000 UTC m=+0.020515314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:13:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 02 11:13:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580129618' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:13:25 compute-0 vigorous_black[96943]: 
Feb 02 11:13:25 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 15 completed events
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:25 compute-0 systemd[1]: libpod-656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04.scope: Deactivated successfully.
Feb 02 11:13:25 compute-0 vigorous_black[96943]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.dhyzzj/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.iybsjv/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.zebspe/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.jqfvjy","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.jqjceq","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.xfsamf","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.847449959 +0000 UTC m=+0.448486001 container died 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:13:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4386433db0f201e52986761bffc0800940363ae8834ebc67b6fa01d0378e8500-merged.mount: Deactivated successfully.
Feb 02 11:13:25 compute-0 podman[96927]: 2026-02-02 11:13:25.878590128 +0000 UTC m=+0.479626140 container remove 656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04 (image=quay.io/ceph/ceph:v19, name=vigorous_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:13:25 compute-0 systemd[1]: libpod-conmon-656a61ca4b6c98d623488dc3b8caacbdb2223161b73b92edf57836432c30fd04.scope: Deactivated successfully.
Feb 02 11:13:25 compute-0 sudo[96924]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes-rgw
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes-rgw
Feb 02 11:13:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:13:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.vtzbes's ganesha conf is defaulting to empty
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.vtzbes's ganesha conf is defaulting to empty
Feb 02 11:13:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.vtzbes on compute-2
Feb 02 11:13:26 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.vtzbes on compute-2
Feb 02 11:13:26 compute-0 ceph-mon[74676]: 6.7 scrub starts
Feb 02 11:13:26 compute-0 ceph-mon[74676]: 6.7 scrub ok
Feb 02 11:13:26 compute-0 ceph-mon[74676]: 7.6 scrub starts
Feb 02 11:13:26 compute-0 ceph-mon[74676]: 7.6 scrub ok
Feb 02 11:13:26 compute-0 ceph-mon[74676]: mds.? [v2:192.168.122.101:6804/2551729492,v1:192.168.122.101:6805/2551729492] up:standby
Feb 02 11:13:26 compute-0 ceph-mon[74676]: fsmap cephfs:1 {0=cephfs.compute-2.mzpewh=up:active} 2 up:standby
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2580129618' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.vtzbes-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:13:26 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:26 compute-0 sudo[97023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibqpcyyxwcytmaozrmhxmcglzvissun ; /usr/bin/python3'
Feb 02 11:13:26 compute-0 sudo[97023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:26 compute-0 python3[97025]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:13:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 02 11:13:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 02 11:13:26 compute-0 podman[97026]: 2026-02-02 11:13:26.797170729 +0000 UTC m=+0.035035069 container create b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:13:26 compute-0 systemd[1]: Started libpod-conmon-b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32.scope.
Feb 02 11:13:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c476ae8b1001479cbecd3aa23fd099fabf3ebf68f51bb0c07ce30f300c7abf3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c476ae8b1001479cbecd3aa23fd099fabf3ebf68f51bb0c07ce30f300c7abf3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:26 compute-0 podman[97026]: 2026-02-02 11:13:26.860055005 +0000 UTC m=+0.097919375 container init b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:26 compute-0 podman[97026]: 2026-02-02 11:13:26.866663909 +0000 UTC m=+0.104528249 container start b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:13:26 compute-0 podman[97026]: 2026-02-02 11:13:26.870211298 +0000 UTC m=+0.108075658 container attach b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:13:26 compute-0 podman[97026]: 2026-02-02 11:13:26.782942132 +0000 UTC m=+0.020806502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23501733' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Feb 02 11:13:27 compute-0 nice_noyce[97041]: mimic
Feb 02 11:13:27 compute-0 systemd[1]: libpod-b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32.scope: Deactivated successfully.
Feb 02 11:13:27 compute-0 podman[97026]: 2026-02-02 11:13:27.237599084 +0000 UTC m=+0.475463424 container died b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c476ae8b1001479cbecd3aa23fd099fabf3ebf68f51bb0c07ce30f300c7abf3d-merged.mount: Deactivated successfully.
Feb 02 11:13:27 compute-0 ceph-mon[74676]: pgmap v17: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:13:27 compute-0 ceph-mon[74676]: 6.5 scrub starts
Feb 02 11:13:27 compute-0 ceph-mon[74676]: 6.5 scrub ok
Feb 02 11:13:27 compute-0 ceph-mon[74676]: 7.3 scrub starts
Feb 02 11:13:27 compute-0 ceph-mon[74676]: 7.3 scrub ok
Feb 02 11:13:27 compute-0 ceph-mon[74676]: Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:27 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.1.0.compute-2.vtzbes-rgw
Feb 02 11:13:27 compute-0 ceph-mon[74676]: Bind address in nfs.cephfs.1.0.compute-2.vtzbes's ganesha conf is defaulting to empty
Feb 02 11:13:27 compute-0 ceph-mon[74676]: Deploying daemon nfs.cephfs.1.0.compute-2.vtzbes on compute-2
Feb 02 11:13:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/23501733' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Feb 02 11:13:27 compute-0 podman[97026]: 2026-02-02 11:13:27.277915759 +0000 UTC m=+0.515780109 container remove b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32 (image=quay.io/ceph/ceph:v19, name=nice_noyce, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:13:27 compute-0 systemd[1]: libpod-conmon-b4289ae614134b8aa4ba91112e63d401453c9bb398e7c28f8a9f7c262df9ca32.scope: Deactivated successfully.
Feb 02 11:13:27 compute-0 sudo[97023]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
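The conf-nfs.cephfs object named above lives in the .nfs pool under the cephfs namespace; a minimal sketch for dumping it with the rados CLI, assuming an admin keyring at the default /etc/ceph path:

    # Print the ganesha config object cephadm manages for this NFS cluster.
    # Pool, namespace, and object name are taken from the log lines above.
    rados --pool .nfs --namespace cephfs get conf-nfs.cephfs /dev/stdout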
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze-rgw
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze-rgw
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
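The two auth get-or-create commands dispatched and finished above map onto this CLI form; a minimal sketch reusing the entity names and caps exactly as logged:

    # Key for the ganesha daemon itself: read on mon, rw on its namespace in .nfs.
    ceph auth get-or-create client.nfs.cephfs.2.0.compute-0.lrvhze \
        mon 'allow r' \
        osd 'allow rw pool=.nfs namespace=cephfs'

    # Companion key used by the daemon's RGW FSAL: rwx on rgw-tagged pools.
    ceph auth get-or-create client.nfs.cephfs.2.0.compute-0.lrvhze-rgw \
        mon 'allow r' \
        osd 'allow rwx tag rgw *=*'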
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.lrvhze's ganesha conf is defaulting to empty
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.lrvhze's ganesha conf is defaulting to empty
Feb 02 11:13:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:13:27 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.lrvhze on compute-0
Feb 02 11:13:27 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.lrvhze on compute-0
Feb 02 11:13:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 02 11:13:27 compute-0 sudo[97114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:27 compute-0 sudo[97114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:27 compute-0 sudo[97114]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:27 compute-0 sudo[97139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:27 compute-0 sudo[97139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:28 compute-0 sudo[97215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhthlmbgsntlbncrspozklawdclkdzmu ; /usr/bin/python3'
Feb 02 11:13:28 compute-0 sudo[97215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.261422914 +0000 UTC m=+0.042674362 container create 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:28 compute-0 systemd[1]: Started libpod-conmon-0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203.scope.
Feb 02 11:13:28 compute-0 python3[97219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
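The Ansible task above shells out to podman with the config and keyring bind-mounted; where cephadm is installed on the host, the same probe can be run more simply. A sketch, assuming a single cluster so the fsid is inferred:

    # Equivalent to the containerized "ceph versions" invocation in the task above.
    cephadm shell -- ceph versions -f json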
Feb 02 11:13:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.332270082 +0000 UTC m=+0.113521540 container init 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.336249213 +0000 UTC m=+0.117500661 container start 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.243533625 +0000 UTC m=+0.024785073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.3393798 +0000 UTC m=+0.120631248 container attach 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:13:28 compute-0 distracted_elbakyan[97248]: 167 167
Feb 02 11:13:28 compute-0 systemd[1]: libpod-0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203.scope: Deactivated successfully.
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.342055015 +0000 UTC m=+0.123306473 container died 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.368717339 +0000 UTC m=+0.046132659 container create a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:13:28 compute-0 podman[97231]: 2026-02-02 11:13:28.401469213 +0000 UTC m=+0.182720661 container remove 0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:28 compute-0 systemd[1]: Started libpod-conmon-a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4.scope.
Feb 02 11:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8a54752910b6ff06f91c012f60b065905654d07e003e91fc19227b17db5d72b-merged.mount: Deactivated successfully.
Feb 02 11:13:28 compute-0 systemd[1]: libpod-conmon-0d94f12f35380e9570c0002c5ec0e744aeaf5da0ab78625b385b8b1d2e279203.scope: Deactivated successfully.
Feb 02 11:13:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/076842c43d1d49927dc699b46015b90437377b6e659254d0c5fc4aac00335062/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/076842c43d1d49927dc699b46015b90437377b6e659254d0c5fc4aac00335062/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.442336924 +0000 UTC m=+0.119752254 container init a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.349530543 +0000 UTC m=+0.026945873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:13:28 compute-0 systemd[1]: Reloading.
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.448112275 +0000 UTC m=+0.125527595 container start a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.452090476 +0000 UTC m=+0.129505806 container attach a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:13:28 compute-0 systemd-rc-local-generator[97308]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:28 compute-0 systemd-sysv-generator[97313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:28 compute-0 ceph-mon[74676]: 6.2 scrub starts
Feb 02 11:13:28 compute-0 ceph-mon[74676]: 6.2 scrub ok
Feb 02 11:13:28 compute-0 ceph-mon[74676]: 7.2 scrub starts
Feb 02 11:13:28 compute-0 ceph-mon[74676]: 7.2 scrub ok
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb 02 11:13:28 compute-0 ceph-mon[74676]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Rados config object exists: conf-nfs.cephfs
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Creating key for client.nfs.cephfs.2.0.compute-0.lrvhze-rgw
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.lrvhze-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Bind address in nfs.cephfs.2.0.compute-0.lrvhze's ganesha conf is defaulting to empty
Feb 02 11:13:28 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:13:28 compute-0 ceph-mon[74676]: Deploying daemon nfs.cephfs.2.0.compute-0.lrvhze on compute-0
Feb 02 11:13:28 compute-0 systemd[1]: Reloading.
Feb 02 11:13:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 02 11:13:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 02 11:13:28 compute-0 systemd-rc-local-generator[97365]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:28 compute-0 systemd-sysv-generator[97369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb 02 11:13:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508550933' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Feb 02 11:13:28 compute-0 interesting_neumann[97280]: 
Feb 02 11:13:28 compute-0 interesting_neumann[97280]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.906987975 +0000 UTC m=+0.584403335 container died a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:13:28 compute-0 systemd[1]: libpod-a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4.scope: Deactivated successfully.
Feb 02 11:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-076842c43d1d49927dc699b46015b90437377b6e659254d0c5fc4aac00335062-merged.mount: Deactivated successfully.
Feb 02 11:13:28 compute-0 podman[97251]: 2026-02-02 11:13:28.951635941 +0000 UTC m=+0.629051261 container remove a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4 (image=quay.io/ceph/ceph:v19, name=interesting_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:13:28 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:13:28 compute-0 systemd[1]: libpod-conmon-a22002ae41a4476c41e530d95e7c46bd8228fe2b515eed77f590aa16d07ab1c4.scope: Deactivated successfully.
Feb 02 11:13:28 compute-0 sudo[97215]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:29 compute-0 podman[97441]: 2026-02-02 11:13:29.155478562 +0000 UTC m=+0.038575208 container create e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:13:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c3758ee9432ef264441dc702e2e1b3201c32a6081e8f679a63273139069b68/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c3758ee9432ef264441dc702e2e1b3201c32a6081e8f679a63273139069b68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c3758ee9432ef264441dc702e2e1b3201c32a6081e8f679a63273139069b68/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c3758ee9432ef264441dc702e2e1b3201c32a6081e8f679a63273139069b68/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:29 compute-0 podman[97441]: 2026-02-02 11:13:29.201462415 +0000 UTC m=+0.084559071 container init e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:13:29 compute-0 podman[97441]: 2026-02-02 11:13:29.208511322 +0000 UTC m=+0.091607978 container start e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:13:29 compute-0 bash[97441]: e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9
Feb 02 11:13:29 compute-0 podman[97441]: 2026-02-02 11:13:29.136024088 +0000 UTC m=+0.019120774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:13:29 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:13:29 compute-0 sudo[97139]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:13:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:13:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:13:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev a885bb69-9854-4651-899f-dba599ed3db4 (Updating nfs.cephfs deployment (+3 -> 3))
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event a885bb69-9854-4651-899f-dba599ed3db4 (Updating nfs.cephfs deployment (+3 -> 3)) in 8 seconds
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:13:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:13:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 69829af6-32f3-4ff2-b1ef-93b91b54b8ca (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Feb 02 11:13:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
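The two "Unknown block" warnings refer to the bootstrap config rendered into the container, not the rados-backed export config; a sketch for inspecting it in place, using the container name from the lines above (podman assumed on the host):

    # Show the ganesha bootstrap config that produced the RADOS_URLS/RGW warnings.
    podman exec ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze \
        cat /etc/ganesha/ganesha.conf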
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:13:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.qhansn on compute-1
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.qhansn on compute-1
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:13:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:29 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
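Ganesha initializes with no exports, per the WARN above; exports arrive later through the mgr nfs module. A sketch for confirming cluster state and the (expectedly empty) export list once deployment settles, assuming the nfs mgr module commands in this release:

    # Cluster-level view of the ganesha daemons backing "cephfs".
    ceph nfs cluster info cephfs

    # List exports for the cluster; empty output is expected at this point.
    ceph nfs export ls cephfs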
Feb 02 11:13:29 compute-0 ceph-mon[74676]: 6.3 scrub starts
Feb 02 11:13:29 compute-0 ceph-mon[74676]: 6.3 scrub ok
Feb 02 11:13:29 compute-0 ceph-mon[74676]: 7.4 scrub starts
Feb 02 11:13:29 compute-0 ceph-mon[74676]: 7.4 scrub ok
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3508550933' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb 02 11:13:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 02 11:13:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 02 11:13:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 02 11:13:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 02 11:13:30 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 16 completed events
Feb 02 11:13:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:13:30 compute-0 ceph-mon[74676]: 6.d scrub starts
Feb 02 11:13:30 compute-0 ceph-mon[74676]: 6.d scrub ok
Feb 02 11:13:30 compute-0 ceph-mon[74676]: 7.e deep-scrub starts
Feb 02 11:13:30 compute-0 ceph-mon[74676]: 7.e deep-scrub ok
Feb 02 11:13:30 compute-0 ceph-mon[74676]: Deploying daemon haproxy.nfs.cephfs.compute-1.qhansn on compute-1
Feb 02 11:13:30 compute-0 ceph-mon[74676]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb 02 11:13:30 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Feb 02 11:13:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Feb 02 11:13:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 12 op/s
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.a scrub starts
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.a scrub ok
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 7.f scrub starts
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 7.f scrub ok
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.e scrub starts
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.e scrub ok
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 7.8 scrub starts
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 7.8 scrub ok
Feb 02 11:13:32 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.1a scrub starts
Feb 02 11:13:32 compute-0 ceph-mon[74676]: 6.1a scrub ok
Feb 02 11:13:33 compute-0 ceph-mon[74676]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 12 op/s
Feb 02 11:13:33 compute-0 ceph-mon[74676]: 7.9 deep-scrub starts
Feb 02 11:13:33 compute-0 ceph-mon[74676]: 7.9 deep-scrub ok
Feb 02 11:13:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:33 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.wzpgfa on compute-0
Feb 02 11:13:33 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.wzpgfa on compute-0
Feb 02 11:13:33 compute-0 sudo[97511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:33 compute-0 sudo[97511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:33 compute-0 sudo[97511]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:33 compute-0 sudo[97536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:33 compute-0 sudo[97536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 12 op/s
Feb 02 11:13:34 compute-0 ceph-mon[74676]: 7.b scrub starts
Feb 02 11:13:34 compute-0 ceph-mon[74676]: 7.b scrub ok
Feb 02 11:13:34 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:34 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:34 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:34 compute-0 ceph-mon[74676]: Deploying daemon haproxy.nfs.cephfs.compute-0.wzpgfa on compute-0
Feb 02 11:13:34 compute-0 ceph-mon[74676]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 12 op/s
Feb 02 11:13:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:34 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd718000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:35 compute-0 ceph-mon[74676]: 7.10 deep-scrub starts
Feb 02 11:13:35 compute-0 ceph-mon[74676]: 7.10 deep-scrub ok
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:13:35 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.22581576 +0000 UTC m=+2.272257581 container create 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 systemd[1]: Started libpod-conmon-4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c.scope.
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.210649477 +0000 UTC m=+2.257091318 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb 02 11:13:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.300128175 +0000 UTC m=+2.346570016 container init 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.307389488 +0000 UTC m=+2.353831309 container start 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.31106124 +0000 UTC m=+2.357503091 container attach 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 adoring_thompson[97723]: 0 0
Feb 02 11:13:36 compute-0 systemd[1]: libpod-4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c.scope: Deactivated successfully.
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.315393531 +0000 UTC m=+2.361835362 container died 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e55c9d62e0a1db24ab160412a2ae69a01ca159c8f1fa68b668d543e93adaa22a-merged.mount: Deactivated successfully.
Feb 02 11:13:36 compute-0 podman[97606]: 2026-02-02 11:13:36.349265136 +0000 UTC m=+2.395706957 container remove 4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c (image=quay.io/ceph/haproxy:2.3, name=adoring_thompson)
Feb 02 11:13:36 compute-0 systemd[1]: libpod-conmon-4f98cd67024343c863340f0d74423e0f39175ef2acc96497b21f7b12dbe2249c.scope: Deactivated successfully.
Feb 02 11:13:36 compute-0 systemd[1]: Reloading.
Feb 02 11:13:36 compute-0 systemd-rc-local-generator[97771]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:36 compute-0 systemd-sysv-generator[97774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:36 compute-0 ceph-mon[74676]: 7.13 scrub starts
Feb 02 11:13:36 compute-0 ceph-mon[74676]: 7.13 scrub ok
Feb 02 11:13:36 compute-0 ceph-mon[74676]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb 02 11:13:36 compute-0 systemd[1]: Reloading.
Feb 02 11:13:36 compute-0 systemd-sysv-generator[97815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:36 compute-0 systemd-rc-local-generator[97812]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:36 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:36 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.wzpgfa for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:13:37 compute-0 podman[97868]: 2026-02-02 11:13:37.068107973 +0000 UTC m=+0.038272849 container create 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6b29bea1a141bb7eb99bf5aaa2729bba88f42505be86adca94e7b2522322c0/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:37 compute-0 podman[97868]: 2026-02-02 11:13:37.118230972 +0000 UTC m=+0.088395868 container init 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:13:37 compute-0 podman[97868]: 2026-02-02 11:13:37.121797222 +0000 UTC m=+0.091962098 container start 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:13:37 compute-0 bash[97868]: 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed
Feb 02 11:13:37 compute-0 podman[97868]: 2026-02-02 11:13:37.052514058 +0000 UTC m=+0.022678974 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb 02 11:13:37 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.wzpgfa for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:13:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [NOTICE] 032/111337 (2) : New worker #1 (4) forked
Feb 02 11:13:37 compute-0 sudo[97536]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:37 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.dhhtqe on compute-2
Feb 02 11:13:37 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.dhhtqe on compute-2
Feb 02 11:13:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb 02 11:13:38 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:38 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:38 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:38 compute-0 ceph-mon[74676]: Deploying daemon haproxy.nfs.cephfs.compute-2.dhhtqe on compute-2
Feb 02 11:13:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:38 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:38 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:39 compute-0 ceph-mon[74676]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb 02 11:13:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Feb 02 11:13:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:40 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd714001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:40 compute-0 ceph-mon[74676]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Feb 02 11:13:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:40 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Feb 02 11:13:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Feb 02 11:13:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.uqcwnt on compute-1
Feb 02 11:13:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.uqcwnt on compute-1
Feb 02 11:13:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:42 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:42 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:42 compute-0 ceph-mon[74676]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Feb 02 11:13:42 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:42 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:42 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:42 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:42 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:42 compute-0 ceph-mon[74676]: Deploying daemon keepalived.nfs.cephfs.compute-1.uqcwnt on compute-1
Feb 02 11:13:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:43 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd714001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:44 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:44 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:45 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:45 compute-0 ceph-mon[74676]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:46 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd714001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:46 compute-0 ceph-mon[74676]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:46 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:13:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:13:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.pstbyv on compute-0
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.pstbyv on compute-0
Feb 02 11:13:47 compute-0 sudo[97899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:47 compute-0 sudo[97899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:47 compute-0 sudo[97899]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:47 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:47 compute-0 sudo[97924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:47 compute-0 sudo[97924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:47 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:47 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:47 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:47 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:47 compute-0 ceph-mon[74676]: Deploying daemon keepalived.nfs.cephfs.compute-0.pstbyv on compute-0
Feb 02 11:13:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:48 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:48 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd714001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:49 compute-0 ceph-mon[74676]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:49 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:50 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.484813019 +0000 UTC m=+2.978793686 container create cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Feb 02 11:13:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:50 compute-0 systemd[1]: Started libpod-conmon-cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757.scope.
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.463976303 +0000 UTC m=+2.957956990 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb 02 11:13:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.561282956 +0000 UTC m=+3.055263643 container init cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, distribution-scope=public, release=1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.570602938 +0000 UTC m=+3.064583605 container start cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, description=keepalived for Ceph, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.57424775 +0000 UTC m=+3.068228437 container attach cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, description=keepalived for Ceph, release=1793, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Feb 02 11:13:50 compute-0 upbeat_hoover[98086]: 0 0
Feb 02 11:13:50 compute-0 systemd[1]: libpod-cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757.scope: Deactivated successfully.
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.589491088 +0000 UTC m=+3.083471795 container died cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, architecture=x86_64, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, vendor=Red Hat, Inc., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 02 11:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1052ee6d15be7f4d366bbdaa36a7bdf0a9588aaecaea6fb3090416bc6c97acd8-merged.mount: Deactivated successfully.
Feb 02 11:13:50 compute-0 podman[97990]: 2026-02-02 11:13:50.625921991 +0000 UTC m=+3.119902668 container remove cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hoover, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Feb 02 11:13:50 compute-0 systemd[1]: libpod-conmon-cebf92dd3a09c9df6261ba668a6b0d9b0749108471d633e35405bd255bf4c757.scope: Deactivated successfully.
Feb 02 11:13:50 compute-0 systemd[1]: Reloading.
Feb 02 11:13:50 compute-0 systemd-rc-local-generator[98129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:50 compute-0 systemd-sysv-generator[98137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:50 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f0002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:50 compute-0 systemd[1]: Reloading.
Feb 02 11:13:51 compute-0 systemd-rc-local-generator[98166]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:13:51 compute-0 systemd-sysv-generator[98169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:13:51 compute-0 ceph-mon[74676]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:51 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd7140031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111351 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:13:51 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.pstbyv for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:13:51 compute-0 podman[98232]: 2026-02-02 11:13:51.413075625 +0000 UTC m=+0.043218795 container create 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 02 11:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31148f09bbbcffe3bbdb4ebb29d75d4f4f7b56d00e864736310419a3b8db4c02/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:13:51 compute-0 podman[98232]: 2026-02-02 11:13:51.464085807 +0000 UTC m=+0.094228987 container init 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, architecture=x86_64, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793)
Feb 02 11:13:51 compute-0 podman[98232]: 2026-02-02 11:13:51.46809269 +0000 UTC m=+0.098235850 container start 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, com.redhat.component=keepalived-container, distribution-scope=public, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2)
Feb 02 11:13:51 compute-0 bash[98232]: 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25
Feb 02 11:13:51 compute-0 podman[98232]: 2026-02-02 11:13:51.394624086 +0000 UTC m=+0.024767266 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb 02 11:13:51 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.pstbyv for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Starting Keepalived v2.2.4 (08/21,2021)
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Configuration file /etc/keepalived/keepalived.conf
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Starting VRRP child process, pid=4
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: Startup complete
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: (VI_0) Entering BACKUP STATE (init)
Feb 02 11:13:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:51 2026: VRRP_Script(check_backend) succeeded
Feb 02 11:13:51 compute-0 sudo[97924]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:13:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:13:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.uykkul on compute-2
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.uykkul on compute-2
Feb 02 11:13:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:13:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:52 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:52 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:52 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:52 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:52 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:13:52 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:13:52 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb 02 11:13:52 compute-0 ceph-mon[74676]: Deploying daemon keepalived.nfs.cephfs.compute-2.uykkul on compute-2
Feb 02 11:13:52 compute-0 ceph-mon[74676]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:13:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:52 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:53 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd6f0002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:13:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:54 compute-0 kernel: ganesha.nfsd[97503]: segfault at 50 ip 00007fd7a3c7f32e sp 00007fd7297f9210 error 4 in libntirpc.so.5.8[7fd7a3c64000+2c000] likely on CPU 4 (core 0, socket 4)
Feb 02 11:13:54 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:13:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[97456]: 02/02/2026 11:13:54 : epoch 698086d9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd7140031e0 fd 37 proxy ignored for local
Feb 02 11:13:54 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Feb 02 11:13:54 compute-0 systemd[1]: Started Process Core Dump (PID 98256/UID 0).
Feb 02 11:13:54 compute-0 ceph-mon[74676]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:13:55 2026: (VI_0) Entering MASTER STATE
Feb 02 11:13:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:13:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:13:57 compute-0 ceph-mon[74676]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:57 compute-0 systemd-coredump[98257]: Process 97460 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 45:
                                                   #0  0x00007fd7a3c7f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Feb 02 11:13:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:13:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 69829af6-32f3-4ff2-b1ef-93b91b54b8ca (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 69829af6-32f3-4ff2-b1ef-93b91b54b8ca (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 28 seconds
Feb 02 11:13:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb 02 11:13:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 31b89e85-8621-4f70-af4b-3cb895af1b94 (Updating alertmanager deployment (+1 -> 1))
Feb 02 11:13:57 compute-0 systemd[1]: systemd-coredump@0-98256-0.service: Deactivated successfully.
Feb 02 11:13:57 compute-0 systemd[1]: systemd-coredump@0-98256-0.service: Consumed 1.993s CPU time.
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Feb 02 11:13:57 compute-0 podman[98263]: 2026-02-02 11:13:57.502766192 +0000 UTC m=+0.032587646 container died e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:13:57 compute-0 sudo[98269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:13:57 compute-0 sudo[98269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:57 compute-0 sudo[98269]: pam_unix(sudo:session): session closed for user root
Feb 02 11:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6c3758ee9432ef264441dc702e2e1b3201c32a6081e8f679a63273139069b68-merged.mount: Deactivated successfully.
Feb 02 11:13:57 compute-0 podman[98263]: 2026-02-02 11:13:57.578756946 +0000 UTC m=+0.108578310 container remove e500e59ed7f42822359ba0c862b1acc351602eace05141d5d441715f86e1c3c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:13:57 compute-0 sudo[98303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:13:57 compute-0 sudo[98303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:13:57 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:13:57 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:13:57 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.762s CPU time.
Feb 02 11:13:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:58 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:58 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:58 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:58 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:13:58 compute-0 ceph-mon[74676]: Deploying daemon alertmanager.compute-0 on compute-0
Feb 02 11:13:58 compute-0 ceph-mon[74676]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:13:59 compute-0 podman[98398]: 2026-02-02 11:13:59.999234963 +0000 UTC m=+2.100167904 volume create 1ec1102c04fe52f528772b274e7824308c831c62f4beaf37d3c24d85a503cdbf
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.008882464 +0000 UTC m=+2.109815385 container create 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:13:59.985422036 +0000 UTC m=+2.086354997 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:00 compute-0 systemd[1]: Started libpod-conmon-50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425.scope.
Feb 02 11:14:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3364c1b48382db37042818a78f9c2252f4672aa53e8879fae8c842638c5e4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.079820636 +0000 UTC m=+2.180753577 container init 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.085968739 +0000 UTC m=+2.186901660 container start 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 happy_goldstine[98535]: 65534 65534
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.089264301 +0000 UTC m=+2.190197272 container attach 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 systemd[1]: libpod-50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425.scope: Deactivated successfully.
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.104825848 +0000 UTC m=+2.205758779 container died 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9d3364c1b48382db37042818a78f9c2252f4672aa53e8879fae8c842638c5e4-merged.mount: Deactivated successfully.
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.138523665 +0000 UTC m=+2.239456586 container remove 50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98398]: 2026-02-02 11:14:00.14157743 +0000 UTC m=+2.242510351 volume remove 1ec1102c04fe52f528772b274e7824308c831c62f4beaf37d3c24d85a503cdbf
Feb 02 11:14:00 compute-0 systemd[1]: libpod-conmon-50c713ca37665d6dd6ad4103811d22d6e85957d74e252cb5a63cb852494ea425.scope: Deactivated successfully.
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.197656945 +0000 UTC m=+0.038135142 volume create bd786d7ac44248f37764deffcc2f5331ecb87ce0debce399cb9f72c45146540b
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.207697827 +0000 UTC m=+0.048176024 container create 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 systemd[1]: Started libpod-conmon-9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804.scope.
Feb 02 11:14:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31cab1f8b7bb1312ef56609598e9b34a971332606ac0250ea6ad1afb52502b32/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.18428592 +0000 UTC m=+0.024764127 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:14:00 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.288475425 +0000 UTC m=+0.128953642 container init 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.293499697 +0000 UTC m=+0.133977904 container start 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 eager_lamport[98569]: 65534 65534
Feb 02 11:14:00 compute-0 systemd[1]: libpod-9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804.scope: Deactivated successfully.
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.297876639 +0000 UTC m=+0.138354836 container attach 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.298184228 +0000 UTC m=+0.138662425 container died 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-31cab1f8b7bb1312ef56609598e9b34a971332606ac0250ea6ad1afb52502b32-merged.mount: Deactivated successfully.
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.334865028 +0000 UTC m=+0.175343225 container remove 9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_lamport, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:00 compute-0 podman[98553]: 2026-02-02 11:14:00.339823277 +0000 UTC m=+0.180301494 volume remove bd786d7ac44248f37764deffcc2f5331ecb87ce0debce399cb9f72c45146540b
Feb 02 11:14:00 compute-0 systemd[1]: libpod-conmon-9454dc14fd1c11cfd579277301943798d106be1c790d38bdcfc357d110c0d804.scope: Deactivated successfully.
Feb 02 11:14:00 compute-0 systemd[1]: Reloading.
Feb 02 11:14:00 compute-0 systemd-sysv-generator[98614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:00 compute-0 systemd-rc-local-generator[98605]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:00 compute-0 systemd[1]: Reloading.
Feb 02 11:14:00 compute-0 systemd-rc-local-generator[98652]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:00 compute-0 systemd-sysv-generator[98657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:00 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:00 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 17 completed events
Feb 02 11:14:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 podman[98712]: 2026-02-02 11:14:01.045074951 +0000 UTC m=+0.030077016 volume create 7f59056574df3e3f184de1cd524408228bb23fbe309ce9b32c2d0011f43516fb
Feb 02 11:14:01 compute-0 podman[98712]: 2026-02-02 11:14:01.056831911 +0000 UTC m=+0.041833976 container create 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:14:01 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d53f2c4c49feb3afab235387431b3343aefa4a0bea61a6733342d115b68a27/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d53f2c4c49feb3afab235387431b3343aefa4a0bea61a6733342d115b68a27/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:01 compute-0 podman[98712]: 2026-02-02 11:14:01.114526161 +0000 UTC m=+0.099528236 container init 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:01 compute-0 podman[98712]: 2026-02-02 11:14:01.119031598 +0000 UTC m=+0.104033653 container start 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:01 compute-0 bash[98712]: 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97
Feb 02 11:14:01 compute-0 podman[98712]: 2026-02-02 11:14:01.034465683 +0000 UTC m=+0.019467758 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:01 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.160Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.160Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.168Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.170Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Feb 02 11:14:01 compute-0 sudo[98303]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.211Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.212Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.217Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Feb 02 11:14:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:01.217Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 31b89e85-8621-4f70-af4b-3cb895af1b94 (Updating alertmanager deployment (+1 -> 1))
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 31b89e85-8621-4f70-af4b-3cb895af1b94 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev affd22ff-bbe4-4775-9e27-117e65fc2dc7 (Updating grafana deployment (+1 -> 1))
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb 02 11:14:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Feb 02 11:14:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Feb 02 11:14:01 compute-0 sudo[98749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:01 compute-0 sudo[98749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:01 compute-0 sudo[98749]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:01 compute-0 sudo[98774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:01 compute-0 sudo[98774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: Regenerating cephadm self-signed grafana TLS certificates
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb 02 11:14:02 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111402 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:14:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:03.171Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000672859s
Feb 02 11:14:03 compute-0 ceph-mon[74676]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb 02 11:14:03 compute-0 ceph-mon[74676]: Deploying daemon grafana.compute-0 on compute-0
Feb 02 11:14:03 compute-0 ceph-mon[74676]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:05 compute-0 ceph-mon[74676]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:14:05
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.nfs', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr']
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Feb 02 11:14:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:14:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:14:05 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:14:06 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 18 completed events
Feb 02 11:14:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:14:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb 02 11:14:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:06 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb 02 11:14:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb 02 11:14:06 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 58115b32-dd30-4039-91e5-43c01f3f0aa2 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 02 11:14:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:14:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb 02 11:14:07 compute-0 ceph-mon[74676]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:14:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:07 compute-0 ceph-mon[74676]: osdmap e53: 3 total, 3 up, 3 in
Feb 02 11:14:07 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb 02 11:14:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb 02 11:14:07 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev a9a56f69-7141-4171-8b52-75228eb1e120 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 02 11:14:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:14:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 639 B/s wr, 2 op/s
Feb 02 11:14:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:14:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:14:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:07 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 1.
Feb 02 11:14:07 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:07 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.762s CPU time.
Feb 02 11:14:07 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:07 compute-0 podman[98842]: 2026-02-02 11:14:07.966514576 +0000 UTC m=+6.026782854 container create b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:07 compute-0 podman[98842]: 2026-02-02 11:14:07.951680989 +0000 UTC m=+6.011949287 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:08 compute-0 systemd[1]: Started libpod-conmon-b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9.scope.
Feb 02 11:14:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:08 compute-0 podman[98842]: 2026-02-02 11:14:08.044783094 +0000 UTC m=+6.105051402 container init b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 podman[98842]: 2026-02-02 11:14:08.053116118 +0000 UTC m=+6.113384396 container start b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 stupefied_lamport[99095]: 472 0
Feb 02 11:14:08 compute-0 podman[98842]: 2026-02-02 11:14:08.056951245 +0000 UTC m=+6.117219553 container attach b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 systemd[1]: libpod-b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9.scope: Deactivated successfully.
Feb 02 11:14:08 compute-0 conmon[99095]: conmon b19da92d03bb800336c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9.scope/container/memory.events
Feb 02 11:14:08 compute-0 podman[98842]: 2026-02-02 11:14:08.058372475 +0000 UTC m=+6.118640753 container died b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ce63db5b6e7da84d0640fc6e824b8ad33abdb42a7945b94a2b2bc73ad951d8-merged.mount: Deactivated successfully.
Feb 02 11:14:08 compute-0 podman[98842]: 2026-02-02 11:14:08.097230627 +0000 UTC m=+6.157498905 container remove b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_lamport, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 systemd[1]: libpod-conmon-b19da92d03bb800336c549ac55c7598603b4535a135ac1b5d8c344afed8f4cd9.scope: Deactivated successfully.
Feb 02 11:14:08 compute-0 podman[99109]: 2026-02-02 11:14:08.109248144 +0000 UTC m=+0.058089652 container create 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab7a27578d1e241f791e54fbd23b877b9eb049f45e0e7ea8c64c5c05a4e324/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab7a27578d1e241f791e54fbd23b877b9eb049f45e0e7ea8c64c5c05a4e324/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab7a27578d1e241f791e54fbd23b877b9eb049f45e0e7ea8c64c5c05a4e324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab7a27578d1e241f791e54fbd23b877b9eb049f45e0e7ea8c64c5c05a4e324/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:08 compute-0 podman[99109]: 2026-02-02 11:14:08.074832488 +0000 UTC m=+0.023673996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.189264761 +0000 UTC m=+0.072438945 container create 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 podman[99109]: 2026-02-02 11:14:08.207300007 +0000 UTC m=+0.156141515 container init 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:14:08 compute-0 podman[99109]: 2026-02-02 11:14:08.212674638 +0000 UTC m=+0.161516146 container start 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:14:08 compute-0 bash[99109]: 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:14:08 compute-0 systemd[1]: Started libpod-conmon-3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84.scope.
Feb 02 11:14:08 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.143116365 +0000 UTC m=+0.026290579 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.26863646 +0000 UTC m=+0.151810644 container init 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.27291601 +0000 UTC m=+0.156090194 container start 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 bold_kapitsa[99158]: 472 0
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.276300535 +0000 UTC m=+0.159474739 container attach 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 systemd[1]: libpod-3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84.scope: Deactivated successfully.
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.277064896 +0000 UTC m=+0.160239080 container died 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-957ca1dcabb733a36094142e896d8ea3aa99728cf7ccdf7f6abe2a364d6f3c70-merged.mount: Deactivated successfully.
Feb 02 11:14:08 compute-0 podman[99135]: 2026-02-02 11:14:08.314720084 +0000 UTC m=+0.197894268 container remove 3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84 (image=quay.io/ceph/grafana:10.4.0, name=bold_kapitsa, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:08 compute-0 systemd[1]: libpod-conmon-3bae5c946e7539b309c67a5958bbd842bdbc4d60a0191a86e55449e9eb471e84.scope: Deactivated successfully.
Feb 02 11:14:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:08 compute-0 systemd[1]: Reloading.
Feb 02 11:14:08 compute-0 systemd-rc-local-generator[99241]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:08 compute-0 systemd-sysv-generator[99245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb 02 11:14:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:08 compute-0 ceph-mon[74676]: osdmap e54: 3 total, 3 up, 3 in
Feb 02 11:14:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:08 compute-0 ceph-mon[74676]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 639 B/s wr, 2 op/s
Feb 02 11:14:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:08 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb 02 11:14:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb 02 11:14:08 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev d36577cb-52d2-4221-9137-07717cb02146 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 02 11:14:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:14:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:08 compute-0 systemd[1]: Reloading.
Feb 02 11:14:08 compute-0 systemd-rc-local-generator[99279]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:08 compute-0 systemd-sysv-generator[99283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:08 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:09 compute-0 podman[99343]: 2026-02-02 11:14:09.137687553 +0000 UTC m=+0.055433368 container create ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:09 compute-0 podman[99343]: 2026-02-02 11:14:09.198888672 +0000 UTC m=+0.116634497 container init ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:09 compute-0 podman[99343]: 2026-02-02 11:14:09.106188539 +0000 UTC m=+0.023934434 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:09 compute-0 podman[99343]: 2026-02-02 11:14:09.206847665 +0000 UTC m=+0.124593470 container start ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:09 compute-0 bash[99343]: ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5
Feb 02 11:14:09 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:09 compute-0 sudo[98774]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev affd22ff-bbe4-4775-9e27-117e65fc2dc7 (Updating grafana deployment (+1 -> 1))
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event affd22ff-bbe4-4775-9e27-117e65fc2dc7 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev a88b9a6c-32cf-47e6-9ed7-f8fc9850ae53 (Updating ingress.rgw.default deployment (+4 -> 4))
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.xmtqhc on compute-0
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.xmtqhc on compute-0
Feb 02 11:14:09 compute-0 sudo[99377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:09 compute-0 sudo[99377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388587578Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-02-02T11:14:09Z
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388871686Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388878587Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388882047Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388885407Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388888247Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388891207Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388894017Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388897807Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388901267Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388904357Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388907757Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388910688Z level=info msg=Target target=[all]
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388918108Z level=info msg="Path Home" path=/usr/share/grafana
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388920898Z level=info msg="Path Data" path=/var/lib/grafana
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388923918Z level=info msg="Path Logs" path=/var/log/grafana
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388926498Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388929548Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=settings t=2026-02-02T11:14:09.388932808Z level=info msg="App mode production"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=sqlstore t=2026-02-02T11:14:09.389209386Z level=info msg="Connecting to DB" dbtype=sqlite3
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=sqlstore t=2026-02-02T11:14:09.389221326Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.389829443Z level=info msg="Starting DB migrations"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.391264354Z level=info msg="Executing migration" id="create migration_log table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.392331054Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.06594ms
Feb 02 11:14:09 compute-0 sudo[99377]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.395637406Z level=info msg="Executing migration" id="create user table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.396432219Z level=info msg="Migration successfully executed" id="create user table" duration=796.563µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.39826271Z level=info msg="Executing migration" id="add unique index user.login"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.399097264Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=834.854µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.400805602Z level=info msg="Executing migration" id="add unique index user.email"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.401317306Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=511.925µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.402785027Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.403370304Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=585.917µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.404862145Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.405358859Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=497.104µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.406936464Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.408904599Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.967245ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.410642328Z level=info msg="Executing migration" id="create user table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.411250935Z level=info msg="Migration successfully executed" id="create user table v2" duration=609.347µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.412799868Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.413301242Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=500.944µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.415395011Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.416631196Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.240225ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.41855829Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.419015113Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=464.653µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.420287119Z level=info msg="Executing migration" id="Drop old table user_v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.420930967Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=643.088µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.422680106Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.423800597Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.120911ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.425405112Z level=info msg="Executing migration" id="Update user table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.425429923Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.161µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.426961216Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.428048757Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.08714ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.429779425Z level=info msg="Executing migration" id="Add missing user data"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.429982671Z level=info msg="Migration successfully executed" id="Add missing user data" duration=204.136µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.434252281Z level=info msg="Executing migration" id="Add is_disabled column to user"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.436060372Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.829311ms
Feb 02 11:14:09 compute-0 sudo[99402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
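The sudo line above shows how cephadm's orchestrator deploys a service: the mgr copies a hash-suffixed cephadm script onto the host and runs it under sudo with the internal "_orch deploy" subcommand. Purely as an illustration, a minimal Python sketch of that same invocation follows; every value is copied verbatim from the audit line, and since the mgr normally drives this call and supplies the service spec itself, running the sketch by hand is only useful for inspecting the command shape.

    import subprocess

    # Values copied verbatim from the sudo audit line above; nothing is
    # invented beyond wrapping the argv in a subprocess call.
    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    CEPHADM = (
        f"/var/lib/ceph/{FSID}/cephadm."
        "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
    )

    argv = [
        "sudo", "/bin/python3", CEPHADM,
        "--image", "quay.io/ceph/haproxy:2.3",  # ingress (haproxy) container image
        "--timeout", "895",
        "_orch", "deploy",
        "--fsid", FSID,
    ]

    # The mgr normally feeds the deployment spec to cephadm itself, so a
    # manual run is not expected to produce a working daemon.
    print(" ".join(argv))
    subprocess.run(argv, check=False)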
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.440026193Z level=info msg="Executing migration" id="Add index user.login/user.email"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.441178605Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.143232ms
Feb 02 11:14:09 compute-0 sudo[99402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.4431312Z level=info msg="Executing migration" id="Add is_service_account column to user"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.444276612Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.145752ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.44634127Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.453884192Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.491551ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.456008162Z level=info msg="Executing migration" id="Add uid column to user"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.457127163Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.118971ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.463384549Z level=info msg="Executing migration" id="Update uid column values for users"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.465286002Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=1.902083ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.468100811Z level=info msg="Executing migration" id="Add unique index user_uid"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.470151449Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=2.049808ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.472630048Z level=info msg="Executing migration" id="create temp user table v1-7"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.473554404Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=924.346µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.475788577Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.476664702Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=876.725µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.478532384Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.479247654Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=714.78µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.480977953Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.481762835Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=764.141µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.48372722Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.484538903Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=811.223µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.486400195Z level=info msg="Executing migration" id="Update temp_user table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.486426656Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=26.921µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.488245977Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.489018239Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=771.992µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.490779518Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.491445477Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=665.069µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.493646069Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.495175832Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.528683ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.497520587Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.498525366Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.009438ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.500642315Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.504046571Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.403416ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.505581014Z level=info msg="Executing migration" id="create temp_user v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.506424337Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=840.483µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.508237298Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.508901137Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=665.799µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.51042425Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.51114901Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=721.77µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.512830687Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.513484046Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=652.959µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.515083661Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.515722628Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=638.558µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.517382595Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.517731495Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=348.96µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.519203815Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.519698199Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=493.884µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.521462459Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.521797168Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=334.349µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.52363135Z level=info msg="Executing migration" id="create star table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.524239547Z level=info msg="Migration successfully executed" id="create star table" duration=607.978µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.526054138Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.526723926Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=669.598µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.52862313Z level=info msg="Executing migration" id="create org table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.529297229Z level=info msg="Migration successfully executed" id="create org table v1" duration=672.999µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.530920474Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.531613454Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=687.73µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.533244839Z level=info msg="Executing migration" id="create org_user table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.533912738Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=671.139µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.535601256Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.536358087Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=756.651µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.537917931Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.538633291Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=714.87µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.540129653Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.540812912Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=683.539µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.542450708Z level=info msg="Executing migration" id="Update org table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.542471449Z level=info msg="Migration successfully executed" id="Update org table charset" duration=21.551µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.544336501Z level=info msg="Executing migration" id="Update org_user table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.544355842Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=20.02µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.546024938Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.546181333Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=156.555µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.547931452Z level=info msg="Executing migration" id="create dashboard table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.548677953Z level=info msg="Migration successfully executed" id="create dashboard table" duration=746.261µs
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.550885765Z level=info msg="Executing migration" id="add index dashboard.account_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.551807311Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=922.036µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.55392577Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.554736843Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=810.483µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.556891844Z level=info msg="Executing migration" id="create dashboard_tag table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.557545132Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=644.529µs
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.563604362Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.564678722Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.07695ms
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.570777513Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.571658718Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=883.065µs
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 375918c1-f02a-47ae-a9a2-c1b3c4c51345 (PG autoscaler increasing pool 11 PGs from 1 to 32)
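The progress line above is the mgr's progress module registering a tracked event for the autoscaler's PG increase on pool 11. While such an event is in flight it also appears in the "ceph -s" output, and the progress module's own commands can list it directly. A brief sketch (standard progress-module commands; the output format varies by release):

    import subprocess

    # "ceph progress" lists in-flight mgr progress events, such as the
    # "PG autoscaler increasing pool 11 PGs from 1 to 32" event above.
    subprocess.run(["ceph", "progress"], check=True)

    # Machine-readable form of the same data.
    subprocess.run(["ceph", "progress", "json"], check=True)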
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.580091085Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Feb 02 11:14:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:09 compute-0 ceph-mon[74676]: osdmap e55: 3 total, 3 up, 3 in
Feb 02 11:14:09 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.585757864Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.641498ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.587678058Z level=info msg="Executing migration" id="create dashboard v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.58846535Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=787.092µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.590419095Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.591073193Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=654.368µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.592659118Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.593400779Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=741.401µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.596383482Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.596677391Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=293.849µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.599089608Z level=info msg="Executing migration" id="drop table dashboard_v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.599767898Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=677.909µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.603307737Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.603366119Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=58.211µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.60483248Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.606077345Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.244925ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.608017569Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.610015575Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.999556ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.611584369Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.612987459Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.40568ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.614490611Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.615128019Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=636.638µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.616642471Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.617980209Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.337118ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.619917353Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.62051458Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=597.277µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.622271059Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.622913077Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=642.558µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.624566084Z level=info msg="Executing migration" id="Update dashboard table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.624595285Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=28.711µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.626687273Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.626710344Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=23.771µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.628468603Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.629984686Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.515673ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.631769216Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.633226397Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.457041ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.634667508Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.636355245Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.686348ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.638229307Z level=info msg="Executing migration" id="Add column uid in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.639896194Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.665247ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.642449266Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.642719024Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=271.268µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.644397931Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.645350807Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=951.736µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.647417015Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.648187377Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=770.582µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.649839104Z level=info msg="Executing migration" id="Update dashboard title length"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.649868544Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=30.241µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.651716906Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.652504108Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=786.592µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.654841774Z level=info msg="Executing migration" id="create dashboard_provisioning"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.655987056Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.151102ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.658063845Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.662292023Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.222409ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.664150465Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.664903027Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=753.302µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.666725268Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.667432658Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=709.729µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.669203617Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.669867256Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=663.619µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.671615615Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.671893233Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=277.358µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.673423576Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.673887699Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=463.983µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.675317979Z level=info msg="Executing migration" id="Add check_sum column"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.676730469Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.41257ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.678297043Z level=info msg="Executing migration" id="Add index for dashboard_title"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.679023033Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=722.42µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.680444423Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.680580257Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=133.724µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.682139431Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.682271834Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=132.273µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.68388928Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.684470686Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=581.076µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.686000589Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.687581653Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.580264ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.689157068Z level=info msg="Executing migration" id="create data_source table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.689898308Z level=info msg="Migration successfully executed" id="create data_source table" duration=741.79µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.691856183Z level=info msg="Executing migration" id="add index data_source.account_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.692544513Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=688.89µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.694118917Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.694760205Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=640.758µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.696811453Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.697924504Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.117662ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.701762772Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.702379309Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=616.388µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.704514469Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.708783029Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.26813ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.710796235Z level=info msg="Executing migration" id="create data_source table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.711533046Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=736.521µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.716117775Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.716840435Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=722.32µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.718663826Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.719663404Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=995.608µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.721384963Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.722099793Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=714.27µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.723873612Z level=info msg="Executing migration" id="Add column with_credentials"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.72592968Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.055438ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.727515235Z level=info msg="Executing migration" id="Add secure json data column"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.729224193Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.709788ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.731230559Z level=info msg="Executing migration" id="Update data_source table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.73125561Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.121µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.732914336Z level=info msg="Executing migration" id="Update initial version to 1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.733141703Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=227.417µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.734722097Z level=info msg="Executing migration" id="Add read_only data column"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.736817256Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.093979ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.738679328Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.738857053Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=177.605µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.740449178Z level=info msg="Executing migration" id="Update json_data with nulls"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.740601692Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=152.384µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.742279349Z level=info msg="Executing migration" id="Add uid column"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.744592104Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.308265ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.746686193Z level=info msg="Executing migration" id="Update uid value"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.746911589Z level=info msg="Migration successfully executed" id="Update uid value" duration=224.186µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.748672749Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.749452911Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=784.812µs
Feb 02 11:14:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
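The pgmap line above is the mgr's periodic cluster summary: 260 PGs total, of which 198 are active+clean and 62 are still unknown while pools are being created and split. A small parsing sketch against that exact line (the regex and field names are my own, written for the format shown here):

    import re

    line = ("pgmap v43: 260 pgs: 62 unknown, 198 active+clean; "
            "456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; "
            "1.5 KiB/s rd, 1 op/s")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: ([^;]+);", line)
    version, total_pgs = int(m.group(1)), int(m.group(2))

    # "62 unknown, 198 active+clean" -> {"unknown": 62, "active+clean": 198}
    breakdown = {
        state: int(count)
        for count, state in re.findall(r"(\d+) ([\w+]+)", m.group(3))
    }

    assert sum(breakdown.values()) == total_pgs  # 62 + 198 == 260
    print(version, total_pgs, breakdown)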
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:14:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
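These handle_command/dispatch pairs are the mgr's pg_autoscaler raising pool pg_num to 32; pg_num_actual appears to be the stepwise, currently-applied counterpart that the mgr adjusts as PG splitting proceeds. An operator can issue the plain form of the same mon command by hand; a hedged sketch using the pool names and target value from the audit lines above (an illustration, not a replay of the mgr's internal stepping):

    import subprocess

    # Pool names and the target of 32 are taken from the audit lines above.
    for pool in (".nfs", "default.rgw.meta", "default.rgw.control"):
        subprocess.run(["ceph", "osd", "pool", "set", pool, "pg_num", "32"],
                       check=True)

    # Confirm the applied value afterwards with:
    #   ceph osd pool get <pool> pg_num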
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.756671423Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.757634291Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=964.837µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.759306047Z level=info msg="Executing migration" id="create api_key table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.760044578Z level=info msg="Migration successfully executed" id="create api_key table" duration=738.511µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.761602182Z level=info msg="Executing migration" id="add index api_key.account_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.762409025Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=804.722µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.764166604Z level=info msg="Executing migration" id="add index api_key.key"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.764842993Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=676.299µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.76653533Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.767519018Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=982.608µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.770534713Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.771708496Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.177903ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.773451075Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.774138624Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=688.919µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.776967823Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.778068914Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.101791ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.780101961Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.785153833Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.053012ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.788033714Z level=info msg="Executing migration" id="create api_key table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.789382122Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.353298ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.795197095Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.796193053Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=995.668µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.799817195Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.801084541Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.270636ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.804025323Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.804865457Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=844.144µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.807433519Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.80818146Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=758.441µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.809932279Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.810538566Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=606.087µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.812440389Z level=info msg="Executing migration" id="Update api_key table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.812499361Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=56.022µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.814050585Z level=info msg="Executing migration" id="Add expires to api_key table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.815980149Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.928904ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.817769549Z level=info msg="Executing migration" id="Add service account foreign key"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.819499338Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.729799ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.822206184Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.822466801Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=260.127µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.824274102Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.826078722Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.80448ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.827874253Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.829687684Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.813991ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.832278667Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.83310717Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=828.323µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.836008601Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.836962108Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=957.577µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.839819278Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.841024452Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.209384ms
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.840640011 +0000 UTC m=+0.043699908 container create c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.842887274Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.843545763Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=653.339µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.845396105Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.846142456Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=745.551µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.84806333Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.849001196Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=939.186µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.851367423Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.851422964Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=55.491µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.853168713Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.853202634Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=37.711µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.854857721Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.85695745Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.10012ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.858789161Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.861119676Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.329125ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.862912217Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.862964248Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=52.591µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.864492691Z level=info msg="Executing migration" id="create quota table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.865057057Z level=info msg="Migration successfully executed" id="create quota table v1" duration=564.196µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.866535088Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.867131025Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=595.607µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.868795422Z level=info msg="Executing migration" id="Update quota table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.868818283Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=22.971µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.870280914Z level=info msg="Executing migration" id="create plugin_setting table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.871025265Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=743.98µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.872372832Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.873100783Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=723.751µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.8747717Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.877151637Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.379677ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.878656319Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.878674269Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=18.85µs
Feb 02 11:14:09 compute-0 systemd[1]: Started libpod-conmon-c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715.scope.
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.880244743Z level=info msg="Executing migration" id="create session table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.880964114Z level=info msg="Migration successfully executed" id="create session table" duration=719.721µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.882457676Z level=info msg="Executing migration" id="Drop old table playlist table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.882525277Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=69.562µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.883939317Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.884008359Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=69.902µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.885544782Z level=info msg="Executing migration" id="create playlist table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.886316254Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=770.742µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.887867748Z level=info msg="Executing migration" id="create playlist item table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.888547567Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=680.7µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.890341887Z level=info msg="Executing migration" id="Update playlist table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.890366768Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.901µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.892017014Z level=info msg="Executing migration" id="Update playlist_item table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.892036295Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=20.331µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.893531697Z level=info msg="Executing migration" id="Add playlist column created_at"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.895871942Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.336686ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.897402515Z level=info msg="Executing migration" id="Add playlist column updated_at"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.899562166Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.162641ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.901269624Z level=info msg="Executing migration" id="drop preferences table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.901334136Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=65.162µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.902880219Z level=info msg="Executing migration" id="drop preferences table v3"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.902935141Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=55.142µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.904398382Z level=info msg="Executing migration" id="create preferences table v3"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.90505757Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=659.538µs
Feb 02 11:14:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.906647475Z level=info msg="Executing migration" id="Update preferences table charset"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.906672386Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.681µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.908468666Z level=info msg="Executing migration" id="Add column team_id in preferences"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.910885484Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.415468ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.912663974Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.912794607Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=130.963µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.914342351Z level=info msg="Executing migration" id="Add column week_start in preferences"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.916599314Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.254923ms
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.82170731 +0000 UTC m=+0.024767227 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.91859866Z level=info msg="Executing migration" id="Add column preferences.json_data"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.921178723Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.580683ms
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.922525761 +0000 UTC m=+0.125585688 container init c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.922958033Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.923009204Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=51.401µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.924595999Z level=info msg="Executing migration" id="Add preferences index org_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.925442243Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=846.644µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.926958695Z level=info msg="Executing migration" id="Add preferences index user_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.927623264Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=663.909µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.929051394Z level=info msg="Executing migration" id="create alert table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.929924689Z level=info msg="Migration successfully executed" id="create alert table v1" duration=873.704µs
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.930541406 +0000 UTC m=+0.133601303 container start c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.931908354Z level=info msg="Executing migration" id="add index alert org_id & id "
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.932708687Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=799.983µs
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.93388224 +0000 UTC m=+0.136942167 container attach c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.934476976Z level=info msg="Executing migration" id="add index alert state"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.93533031Z level=info msg="Migration successfully executed" id="add index alert state" duration=852.634µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.936988117Z level=info msg="Executing migration" id="add index alert dashboard_id"
Feb 02 11:14:09 compute-0 affectionate_lehmann[99484]: 0 0
Feb 02 11:14:09 compute-0 systemd[1]: libpod-c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715.scope: Deactivated successfully.
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.937705937Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=717.27µs
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.939137477 +0000 UTC m=+0.142197384 container died c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.939529598Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.940262629Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=732.741µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.941933896Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.942697567Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=763.001µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.944298062Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.944999662Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=699.61µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.946479553Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.954661413Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.17547ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.956661659Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.958015627Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.357918ms
Feb 02 11:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-398ff6e1d5d31aefa0d69b2271355b3581655ecd96e056f1afbec1a8fe5d0f17-merged.mount: Deactivated successfully.
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.959644053Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.960318082Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=673.599µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.96201396Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.962227696Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=213.866µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.963540752Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.964026256Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=485.064µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.965697233Z level=info msg="Executing migration" id="create alert_notification table v1"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.966262749Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=565.016µs
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.968012268Z level=info msg="Executing migration" id="Add column is_default"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.970708814Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.695936ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.973475491Z level=info msg="Executing migration" id="Add column frequency"
Feb 02 11:14:09 compute-0 podman[99468]: 2026-02-02 11:14:09.974088809 +0000 UTC m=+0.177148706 container remove c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715 (image=quay.io/ceph/haproxy:2.3, name=affectionate_lehmann)
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.976303151Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.8271ms
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.978151143Z level=info msg="Executing migration" id="Add column send_reminder"
Feb 02 11:14:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:09.981718853Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.56861ms
Feb 02 11:14:09 compute-0 systemd[1]: libpod-conmon-c61a4049e898de33281a0b064b8a1a3ac08ffaeff5ecdb65c67c8e90f4338715.scope: Deactivated successfully.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.103710849Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.106566689Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.859891ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.34095957Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.342063841Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.107781ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.345539419Z level=info msg="Executing migration" id="Update alert table charset"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.34558557Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=47.601µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.347584986Z level=info msg="Executing migration" id="Update alert_notification table charset"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.347617877Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=77.682µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.349566782Z level=info msg="Executing migration" id="create notification_journal table v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.350340814Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=773.952µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.421470461Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.422400007Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=932.676µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.424494856Z level=info msg="Executing migration" id="drop alert_notification_journal"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.425179735Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=685.439µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.42714191Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.427819909Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=677.639µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.429848626Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.430456683Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=607.797µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.43211981Z level=info msg="Executing migration" id="Add for to alert table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.434525628Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.405618ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.436219225Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.438811888Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.592333ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.440706331Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.441072282Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=366.72µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.443964213Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.445233868Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.274966ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.447124091Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.448014326Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=890.245µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.489645866Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.492848015Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.201969ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.494573374Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.494652256Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=82.692µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.496428566Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.497070664Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=645.268µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.498797422Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.499463641Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=666.289µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.501251351Z level=info msg="Executing migration" id="Drop old annotation table v4"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.501312063Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=61.042µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.503888535Z level=info msg="Executing migration" id="create annotation table v5"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.504564194Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=675.759µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.506244722Z level=info msg="Executing migration" id="add index annotation 0 v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.50690579Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=661.138µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.509237666Z level=info msg="Executing migration" id="add index annotation 1 v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.509912815Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=675.149µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.511365195Z level=info msg="Executing migration" id="add index annotation 2 v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.512044484Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=678.649µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.513704841Z level=info msg="Executing migration" id="add index annotation 3 v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.51439345Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=688.729µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.516056407Z level=info msg="Executing migration" id="add index annotation 4 v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.516720706Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=664.389µs
Feb 02 11:14:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.518264389Z level=info msg="Executing migration" id="Update annotation table charset"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.51828464Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=20.941µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.519855364Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.523193167Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.337343ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.524820833Z level=info msg="Executing migration" id="Drop category_id index"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.525451901Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=633.068µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.527083637Z level=info msg="Executing migration" id="Add column tags to annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.531688326Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.601359ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.539169656Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.540217446Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.048459ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.541675976Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.542791108Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.114672ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.556167473Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.557159331Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=995.118µs
Feb 02 11:14:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.560975618Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Feb 02 11:14:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb 02 11:14:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.575334152Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.352103ms
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev 981031e6-9a2e-4b6c-bc82-e49775a6d2a1 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 58115b32-dd30-4039-91e5-43c01f3f0aa2 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 58115b32-dd30-4039-91e5-43c01f3f0aa2 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev a9a56f69-7141-4171-8b52-75228eb1e120 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event a9a56f69-7141-4171-8b52-75228eb1e120 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev d36577cb-52d2-4221-9137-07717cb02146 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event d36577cb-52d2-4221-9137-07717cb02146 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 375918c1-f02a-47ae-a9a2-c1b3c4c51345 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 375918c1-f02a-47ae-a9a2-c1b3c4c51345 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev 981031e6-9a2e-4b6c-bc82-e49775a6d2a1 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Feb 02 11:14:10 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event 981031e6-9a2e-4b6c-bc82-e49775a6d2a1 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.579307953Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.580538128Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.231985ms
Feb 02 11:14:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 57 pg[10.0( v 40'48 (0'0,40'48] local-lis/les=39/40 n=8 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=57 pruub=14.056296349s) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 40'47 mlcod 40'47 active pruub 182.976272583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.583584213Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Feb 02 11:14:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 57 pg[10.0( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=57 pruub=14.056296349s) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 40'47 mlcod 0'0 unknown pruub 182.976272583s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.584657943Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.07251ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.587797302Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.588073539Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=280.248µs
Feb 02 11:14:10 compute-0 ceph-mon[74676]: Deploying daemon haproxy.rgw.default.compute-0.xmtqhc on compute-0
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.59130098Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.592227496Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=932.726µs
Feb 02 11:14:10 compute-0 ceph-mon[74676]: osdmap e56: 3 total, 3 up, 3 in
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Feb 02 11:14:10 compute-0 ceph-mon[74676]: pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:10 compute-0 ceph-mon[74676]: osdmap e57: 3 total, 3 up, 3 in
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.595718234Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.595908089Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=174.695µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.597511374Z level=info msg="Executing migration" id="Add created time to annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.60056854Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.054316ms
Feb 02 11:14:10 compute-0 systemd[1]: Reloading.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.603229715Z level=info msg="Executing migration" id="Add updated time to annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.607110454Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.879469ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.60912075Z level=info msg="Executing migration" id="Add index for created in annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.610307794Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.187104ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.612241978Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.613320108Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.07816ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.615004546Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.615475639Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=468.304µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.617195497Z level=info msg="Executing migration" id="Add epoch_end column"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.621523879Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.317661ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.62372559Z level=info msg="Executing migration" id="Add index for epoch_end"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.624983766Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.258006ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.626851418Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.627010383Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=159.275µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.628552026Z level=info msg="Executing migration" id="Move region to single row"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.628857095Z level=info msg="Migration successfully executed" id="Move region to single row" duration=305.138µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.630347366Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.6312059Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=857.574µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.633101634Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.634170564Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.075061ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.638449324Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.63937819Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=925.926µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.641445578Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.642147628Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=701.96µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.644101763Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.644874404Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=772.981µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.647689793Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.648585428Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=894.935µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.652628812Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.652855598Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=230.866µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.655499383Z level=info msg="Executing migration" id="create test_data table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.657005835Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.505952ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.658898008Z level=info msg="Executing migration" id="create dashboard_version table v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.659773383Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=872.814µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.661786229Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.662697545Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=907.396µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.664818594Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.665912655Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.094471ms
Feb 02 11:14:10 compute-0 systemd-sysv-generator[99533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.667967723Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.668183229Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=218.826µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.670249247Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.670568866Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=320.199µs
Feb 02 11:14:10 compute-0 systemd-rc-local-generator[99530]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.672764667Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.672923352Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=163.575µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.675249877Z level=info msg="Executing migration" id="create team table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.676098541Z level=info msg="Migration successfully executed" id="create team table" duration=852.824µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.677644224Z level=info msg="Executing migration" id="add index team.org_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.678438757Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=794.653µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.680089273Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.680807283Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=719.89µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.68246272Z level=info msg="Executing migration" id="Add column uid in team"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.686144813Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.685093ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.687862911Z level=info msg="Executing migration" id="Update uid column values in team"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.688011336Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=148.785µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.68957633Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.690412853Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=836.524µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.692051589Z level=info msg="Executing migration" id="create team member table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.6927925Z level=info msg="Migration successfully executed" id="create team member table" duration=740.881µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.694308552Z level=info msg="Executing migration" id="add index team_member.org_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.695140046Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=830.974µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.696839944Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.697824941Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=984.748µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.699691054Z level=info msg="Executing migration" id="add index team_member.team_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.700517247Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=825.593µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.702546114Z level=info msg="Executing migration" id="Add column email to team table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.707221715Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.674971ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.708691966Z level=info msg="Executing migration" id="Add column external to team_member table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.712864773Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.172697ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.714403327Z level=info msg="Executing migration" id="Add column permission to team_member table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.718221674Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.817327ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.719817409Z level=info msg="Executing migration" id="create dashboard acl table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.720727154Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=908.665µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.722472083Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.72343124Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=955.497µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.725486818Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.726410424Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=922.876µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.728629026Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.729579813Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=950.177µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.732940087Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.73376077Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=820.593µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.735396926Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.736266291Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=858.214µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.737758212Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.738611497Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=852.964µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.740199561Z level=info msg="Executing migration" id="add index dashboard_permission"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.741055745Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=855.804µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.742548907Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.743040071Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=488.674µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.744717488Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.744958215Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=240.587µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.746525339Z level=info msg="Executing migration" id="create tag table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.74726914Z level=info msg="Migration successfully executed" id="create tag table" duration=743.181µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.748822813Z level=info msg="Executing migration" id="add index tag.key_value"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.749634126Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=810.653µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.751308353Z level=info msg="Executing migration" id="create login attempt table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.751988722Z level=info msg="Migration successfully executed" id="create login attempt table" duration=680.099µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.753424842Z level=info msg="Executing migration" id="add index login_attempt.username"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.754128022Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=704.13µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.755605574Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.756317994Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=711.68µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.757930619Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.76902306Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.089021ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.770875272Z level=info msg="Executing migration" id="create login_attempt v2"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.77149915Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=624.128µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.773348332Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.774057152Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=708.84µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.775481212Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.77577912Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=295.238µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.777305513Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.777930161Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=624.768µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.780104422Z level=info msg="Executing migration" id="create user auth table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.780695818Z level=info msg="Migration successfully executed" id="create user auth table" duration=591.376µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.782719265Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.78359317Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=873.505µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.785297237Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.785365929Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=63.882µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.78680587Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.791880322Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.074482ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.793763045Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.799095425Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.32555ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.8010602Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.806099022Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.034271ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.809057155Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.814345353Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.288839ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.8163861Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.817230104Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=843.814µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.820940658Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.825446575Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.516247ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.828013877Z level=info msg="Executing migration" id="create server_lock table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.828647175Z level=info msg="Migration successfully executed" id="create server_lock table" duration=633.048µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.830203458Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.831411232Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.206704ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.833139971Z level=info msg="Executing migration" id="create user auth token table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.83382354Z level=info msg="Migration successfully executed" id="create user auth token table" duration=683.649µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.835376694Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.836089064Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=711.69µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.837841893Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.838533412Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=690.699µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.84058965Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.841693861Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.104881ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.84344972Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.847879125Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.431985ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.849709676Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.850502688Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=793.172µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.85232605Z level=info msg="Executing migration" id="create cache_data table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.853177194Z level=info msg="Migration successfully executed" id="create cache_data table" duration=850.743µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.854910402Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.855733685Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=823.143µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.857213787Z level=info msg="Executing migration" id="create short_url table v1"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.85803231Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=817.583µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.859555293Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.860322824Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=767.491µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.861723684Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.861848867Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=122.204µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.863327329Z level=info msg="Executing migration" id="delete alert_definition table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.86339508Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=68.172µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.864762119Z level=info msg="Executing migration" id="recreate alert_definition table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.86551173Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=749.371µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.867221548Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.86801539Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=790.882µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.870415678Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.871983192Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.574065ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.873825493Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.873879115Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=54.732µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.87550063Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.876528919Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.022479ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.878116234Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.878856565Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=740.501µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.880354847Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.881151619Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=796.122µs
Feb 02 11:14:10 compute-0 systemd[1]: Reloading.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.882580929Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.883376832Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=795.612µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.885267935Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.889667808Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.398313ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.891363986Z level=info msg="Executing migration" id="drop alert_definition table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.892194389Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=830.633µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.894131474Z level=info msg="Executing migration" id="delete alert_definition_version table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.894210926Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=80.323µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.896104699Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.897074786Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=971.237µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.898559718Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.899406882Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=847.364µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.901367817Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.902574711Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.211084ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.904262158Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.904310999Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=49.611µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.905732799Z level=info msg="Executing migration" id="drop alert_definition_version table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.906892472Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.162123ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.90859157Z level=info msg="Executing migration" id="create alert_instance table"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.909606168Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.013098ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.911372408Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.912302634Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=929.406µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.913792596Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.914628099Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=836.253µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.91645738Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.922327725Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.869645ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.923955071Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.925395751Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.44089ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.926919194Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.927587063Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=667.979µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.929100345Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.956363441Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.255866ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.958642375Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Feb 02 11:14:10 compute-0 systemd-rc-local-generator[99571]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:10 compute-0 systemd-sysv-generator[99576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.981791535Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.14133ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.983868463Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.984631535Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=763.072µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.98623155Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.987030122Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=799.342µs
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.988546735Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.992625209Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.077674ms
Feb 02 11:14:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:10.996259361Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.001328274Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.068703ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.011756746Z level=info msg="Executing migration" id="create alert_rule table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.012681283Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=926.066µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.017101337Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.018005262Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=904.185µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.023436184Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.024151645Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=714.94µs
Feb 02 11:14:11 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 24 completed events
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.068428808Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.070106665Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.678207ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.072164833Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.072226425Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=61.591µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.073908272Z level=info msg="Executing migration" id="add column for to alert_rule"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.078566773Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.658601ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.080637521Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:11 compute-0 ceph-mgr[74969]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.087708189Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.074068ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.090354954Z level=info msg="Executing migration" id="add column labels to alert_rule"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.09698957Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.635096ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.099904362Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.112491755Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=12.543392ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.115691805Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.117082074Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.389309ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.119038299Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.124922064Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.884525ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.128042602Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.132008183Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.965351ms
Feb 02 11:14:11 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.xmtqhc for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.133731562Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.135012738Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.280736ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.137338823Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.144956557Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.616154ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.147812387Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.153728173Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.913876ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.155493523Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.155616506Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=123.683µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.157806788Z level=info msg="Executing migration" id="create alert_rule_version table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.158821336Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.015058ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.161118851Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.162608362Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.489701ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.164561357Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.166266405Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.703808ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.168423606Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.16858258Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=160.144µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.170606267Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:11.174Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004171949s
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.1778443Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.219983ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.180127024Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.187317076Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.190372ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.189247841Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.196067552Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.819092ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.197906624Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.202781711Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.875256ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.210005663Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.214916521Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.911418ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.216693961Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.216819995Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=123.984µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.21878838Z level=info msg="Executing migration" id=create_alert_configuration_table
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.219603043Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=814.623µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.221138766Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.225984172Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.842086ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.227903806Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.228018329Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=115.583µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.2294615Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.234037588Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.574928ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.235623093Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.236614571Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=991.588µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.238301058Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.24336024Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.059182ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.245277064Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.246025065Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=747.351µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.247585839Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.248387461Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=801.612µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.249842462Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.254979796Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.136504ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.256721105Z level=info msg="Executing migration" id="create provenance_type table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.257396494Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=674.969µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.25902293Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.259915005Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=893.825µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.261601502Z level=info msg="Executing migration" id="create alert_image table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.262297402Z level=info msg="Migration successfully executed" id="create alert_image table" duration=696.03µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.263789834Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.264630347Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=840.783µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.266100299Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.266223132Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=126.214µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.267608031Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.268428984Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=821.263µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.269928906Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.270986006Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.05663ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.272535229Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.272886399Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.27434689Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.274904126Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=556.806µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.276339316Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.277404736Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.06276ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.27898652Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.285289167Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.300207ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.287543411Z level=info msg="Executing migration" id="create library_element table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.289136415Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.603115ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.290982437Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.29213806Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.155183ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.293525539Z level=info msg="Executing migration" id="create library_element_connection table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.294250019Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=721.58µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.336844325Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.338389388Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.547803ms
Feb 02 11:14:11 compute-0 podman[99629]: 2026-02-02 11:14:11.293850378 +0000 UTC m=+0.019601162 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.397341824Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.398428304Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.08975ms
Feb 02 11:14:11 compute-0 podman[99629]: 2026-02-02 11:14:11.398902928 +0000 UTC m=+0.124653682 container create ffe9db857d767576fcf63b2c70256806e4d8ff60f0d1247c354cd4f8283d3b82 (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-rgw-default-compute-0-xmtqhc)
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.400342848Z level=info msg="Executing migration" id="increase max description length to 2048"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.40040567Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=63.172µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.402338844Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.402477098Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=139.024µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.40537501Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.405981187Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=611.207µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.41214414Z level=info msg="Executing migration" id="create data_keys table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.413244961Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.10132ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.415268047Z level=info msg="Executing migration" id="create secrets table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.416477401Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.208934ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.419405974Z level=info msg="Executing migration" id="rename data_keys name column to id"
Feb 02 11:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1caeaced505c209f84de45e0b32cb6ce2ca93cfc7b6948d4573cedc6693217/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:11 compute-0 podman[99629]: 2026-02-02 11:14:11.452534314 +0000 UTC m=+0.178285108 container init ffe9db857d767576fcf63b2c70256806e4d8ff60f0d1247c354cd4f8283d3b82 (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-rgw-default-compute-0-xmtqhc)
Feb 02 11:14:11 compute-0 podman[99629]: 2026-02-02 11:14:11.456801594 +0000 UTC m=+0.182552368 container start ffe9db857d767576fcf63b2c70256806e4d8ff60f0d1247c354cd4f8283d3b82 (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-rgw-default-compute-0-xmtqhc)
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.457106982Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=37.686279ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.45987758Z level=info msg="Executing migration" id="add name column into data_keys"
Feb 02 11:14:11 compute-0 bash[99629]: ffe9db857d767576fcf63b2c70256806e4d8ff60f0d1247c354cd4f8283d3b82
Feb 02 11:14:11 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.xmtqhc for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-rgw-default-compute-0-xmtqhc[99644]: [NOTICE] 032/111411 (2) : New worker #1 (4) forked
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.469290574Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.406634ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.472101793Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.472310779Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=209.646µs
Feb 02 11:14:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.474449559Z level=info msg="Executing migration" id="rename data_keys name column to label"
Feb 02 11:14:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.028000789s ======
Feb 02 11:14:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.028000789s
Feb 02 11:14:11 compute-0 sudo[99402]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.510235374Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.780165ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.512728964Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.548935791Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=36.201257ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.5510548Z level=info msg="Executing migration" id="create kv_store table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.552216493Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.157763ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.554144297Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.555199957Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.05394ms
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.556667118Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.556916305Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=248.847µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.558479519Z level=info msg="Executing migration" id="create permission table"
Feb 02 11:14:11 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.tfyreb on compute-2
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.559429445Z level=info msg="Migration successfully executed" id="create permission table" duration=950.786µs
Feb 02 11:14:11 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.tfyreb on compute-2
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.561088162Z level=info msg="Executing migration" id="add unique index permission.role_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.562003038Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=915.186µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.563658384Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.564644132Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=981.858µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.566275148Z level=info msg="Executing migration" id="create role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.567098141Z level=info msg="Migration successfully executed" id="create role table" duration=818.543µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.568631274Z level=info msg="Executing migration" id="add column display_name"
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.576086493Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.450599ms
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.578186862Z level=info msg="Executing migration" id="add column group_name"
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.584720076Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.526903ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.58736302Z level=info msg="Executing migration" id="add index role.org_id"
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.12( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1b( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.11( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.10( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.588510102Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.148532ms
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.7( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1f( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1e( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1d( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1c( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.590147538Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1a( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.19( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.6( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.18( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.4( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.591186117Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.036529ms
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.b( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.5( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.3( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.8( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.9( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.d( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.a( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.e( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.c( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.f( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1( v 40'48 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.2( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.13( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.14( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.15( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.16( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.17( v 40'48 lc 0'0 (0'0,40'48] local-lis/les=39/40 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.10( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.596550418Z level=info msg="Executing migration" id="add index role_org_id_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.597431093Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=887.404µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.599440729Z level=info msg="Executing migration" id="create team role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.600113188Z level=info msg="Migration successfully executed" id="create team role table" duration=672.599µs
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1f( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1e( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.12( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1c( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1b( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1a( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1d( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.19( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.18( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.11( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.6( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.4( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.b( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.5( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.8( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.9( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.7( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.e( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.d( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.c( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.a( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.f( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.1( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.0( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 40'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.3( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.13( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.15( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.14( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.17( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.16( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.603832382Z level=info msg="Executing migration" id="add index team_role.org_id"
Feb 02 11:14:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 58 pg[10.2( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=39/39 les/c/f=40/40/0 sis=57) [1] r=0 lpr=57 pi=[39,57)/1 crt=40'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:11 compute-0 ceph-mon[74676]: 8.15 scrub starts
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.604652135Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=819.393µs
Feb 02 11:14:11 compute-0 ceph-mon[74676]: 8.15 scrub ok
Feb 02 11:14:11 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:11 compute-0 ceph-mon[74676]: osdmap e58: 3 total, 3 up, 3 in
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.606845057Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.607909987Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.06449ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.61621084Z level=info msg="Executing migration" id="add index team_role.team_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.617448865Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.240495ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.619092861Z level=info msg="Executing migration" id="create user role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.619958315Z level=info msg="Migration successfully executed" id="create user role table" duration=864.974µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.621411806Z level=info msg="Executing migration" id="add index user_role.org_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.622366363Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=954.287µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.623908646Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.624906094Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=997.008µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.626520009Z level=info msg="Executing migration" id="add index user_role.user_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.627449976Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=929.696µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.62904171Z level=info msg="Executing migration" id="create builtin role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.629822462Z level=info msg="Migration successfully executed" id="create builtin role table" duration=780.632µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.631194781Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.632131497Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=939.036µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.633527606Z level=info msg="Executing migration" id="add index builtin_role.name"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.634450392Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=922.026µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.636019806Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.64363707Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.615934ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.645359358Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.646350636Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=991.258µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.647927871Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.648875477Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=946.247µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.650526223Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.652610032Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.082369ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.655088812Z level=info msg="Executing migration" id="add unique index role.uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.656267195Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.178433ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.658025354Z level=info msg="Executing migration" id="create seed assignment table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.658854087Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=825.893µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.660652608Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.661608395Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=955.617µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.66324098Z level=info msg="Executing migration" id="add column hidden to role table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.670490414Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.238694ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.672345446Z level=info msg="Executing migration" id="permission kind migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.6788832Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.500413ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.680754582Z level=info msg="Executing migration" id="permission attribute migration"
Feb 02 11:14:11 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.686090342Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.33519ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.687861662Z level=info msg="Executing migration" id="permission identifier migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.693600153Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.737931ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.695115456Z level=info msg="Executing migration" id="add permission identifier index"
Feb 02 11:14:11 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.69599591Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=879.235µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.699332484Z level=info msg="Executing migration" id="add permission action scope role_id index"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.70061677Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.284716ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.703439749Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.704395876Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=957.077µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.708698827Z level=info msg="Executing migration" id="create query_history table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.709443308Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=744.241µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.711118415Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.711889697Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=770.922µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.713613585Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.713660926Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=48.221µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.715731484Z level=info msg="Executing migration" id="rbac disabled migrator"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.715790226Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.882µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.717310999Z level=info msg="Executing migration" id="teams permissions migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.71772127Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=410.791µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.719446039Z level=info msg="Executing migration" id="dashboard permissions"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.719992404Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=543.655µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.721827696Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.722424642Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=597.156µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.724191582Z level=info msg="Executing migration" id="drop managed folder create actions"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.724351417Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=159.844µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.725968542Z level=info msg="Executing migration" id="alerting notification permissions"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.726334832Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=367.24µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.727910576Z level=info msg="Executing migration" id="create query_history_star table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.728668698Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=757.592µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.730377456Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.73124989Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=875.554µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.732949538Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.738934606Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.980678ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.740697045Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.740764917Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=65.062µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.742557728Z level=info msg="Executing migration" id="create correlation table v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.743493314Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=934.466µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.745396977Z level=info msg="Executing migration" id="add index correlations.uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.746255832Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=858.374µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.748065302Z level=info msg="Executing migration" id="add index correlations.source_uid"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.748893666Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=825.914µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.750902472Z level=info msg="Executing migration" id="add correlation config column"
Feb 02 11:14:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v46: 322 pgs: 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 11:14:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.757449986Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.546754ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.762327433Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.76328339Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=955.217µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.765132652Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.765926344Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=793.012µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.767679763Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.784136535Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=16.451992ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.786270785Z level=info msg="Executing migration" id="create correlation v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.787694425Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.42595ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.789561758Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.790438622Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=876.844µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.792050638Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.792942823Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=891.585µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.794862916Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.795722641Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=859.115µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.797275554Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.79747816Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=199.646µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.799082035Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.799750574Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=656.428µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.802368707Z level=info msg="Executing migration" id="add provisioning column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.808370776Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.999579ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.810167006Z level=info msg="Executing migration" id="create entity_events table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.810877626Z level=info msg="Migration successfully executed" id="create entity_events table" duration=710.65µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.815189277Z level=info msg="Executing migration" id="create dashboard public config v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.816080592Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=892.735µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.817909684Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.818312535Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.819831688Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.820216638Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.822301027Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.82311628Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=815.283µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.826532716Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.827632697Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.099571ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.829390396Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.830336343Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=945.687µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.831849045Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.832795672Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=942.757µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.834273473Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.83521261Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=938.646µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.83736914Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.839066848Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.704188ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.840876319Z level=info msg="Executing migration" id="Drop public config table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.841708582Z level=info msg="Migration successfully executed" id="Drop public config table" duration=831.864µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.843477692Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.844554982Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.07747ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.84663944Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.847939757Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.301327ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.850496799Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.851599Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.102121ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.853334488Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.854286915Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=953.497µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.855953682Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.875736797Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=19.779375ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.877674122Z level=info msg="Executing migration" id="add annotations_enabled column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.883805994Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.131812ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.886004936Z level=info msg="Executing migration" id="add time_selection_enabled column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.892000624Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.995688ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.89364897Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.893853826Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=205.126µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.895570734Z level=info msg="Executing migration" id="add share column"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.901556853Z level=info msg="Migration successfully executed" id="add share column" duration=5.984908ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.903547778Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.903787145Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=240.617µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.905475133Z level=info msg="Executing migration" id="create file table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.906426399Z level=info msg="Migration successfully executed" id="create file table" duration=951.207µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.908090556Z level=info msg="Executing migration" id="file table idx: path natural pk"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.909125995Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.034499ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.910899755Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.911876742Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=979.457µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.913547619Z level=info msg="Executing migration" id="create file_meta table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.914327061Z level=info msg="Migration successfully executed" id="create file_meta table" duration=775.202µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.915841384Z level=info msg="Executing migration" id="file table idx: path key"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.916687337Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=841.013µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.918380215Z level=info msg="Executing migration" id="set path collation in file table"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.918433926Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=54.461µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.920207246Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.920283838Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=77.072µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.921857553Z level=info msg="Executing migration" id="managed permissions migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.922291915Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=434.082µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.924064515Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.92427339Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=210.296µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.925876465Z level=info msg="Executing migration" id="RBAC action name migrator"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.926928765Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.05264ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.928664924Z level=info msg="Executing migration" id="Add UID column to playlist"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.935496086Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.826091ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.937505112Z level=info msg="Executing migration" id="Update uid column values in playlist"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.937693207Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=188.755µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.939538139Z level=info msg="Executing migration" id="Add index for uid in playlist"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.94062148Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.08252ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.942330917Z level=info msg="Executing migration" id="update group index for alert rules"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.94277494Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=444.043µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.944363465Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.94454006Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=176.595µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.94632163Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.946809683Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=488.194µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.948655855Z level=info msg="Executing migration" id="add action column to seed_assignment"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.956555177Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.894202ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.958539213Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.966819155Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.271692ms
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.968722258Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.969620154Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=895.316µs
Feb 02 11:14:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:11.971414114Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.042560212Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=71.138628ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.044576169Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.045662399Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.081981ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.047342416Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.048268402Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=926.256µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.049781035Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.06956029Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=19.773525ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.07171119Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.078385778Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.673018ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.080299172Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.080563419Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=264.957µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.082197115Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.082343519Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=147.964µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.084066987Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.084231472Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=164.955µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.085854688Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.086003692Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=149.054µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.087779972Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.087968507Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=188.105µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.089562112Z level=info msg="Executing migration" id="create folder table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.090390285Z level=info msg="Migration successfully executed" id="create folder table" duration=828.793µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.092125084Z level=info msg="Executing migration" id="Add index for parent_uid"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.093153553Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.028539ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.094683156Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.095676593Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=992.997µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.09769918Z level=info msg="Executing migration" id="Update folder title length"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.097729821Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.451µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.099234913Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.100114448Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=879.445µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.10161158Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.102411443Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=799.963µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.104325146Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.105239002Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=911.376µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.106758865Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.107151126Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=392.161µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.108783372Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.109037899Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=258.248µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.112163976Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.113411491Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.251025ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.115468669Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.116516299Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.04816ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.118078533Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.118941867Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=863.125µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.12118841Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.122676402Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.492912ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.124147703Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.125033778Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=885.675µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.126567231Z level=info msg="Executing migration" id="create anon_device table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.127370663Z level=info msg="Migration successfully executed" id="create anon_device table" duration=803.302µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.129264147Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.130270575Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.006218ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.134093182Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.13509492Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.000988ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.137776316Z level=info msg="Executing migration" id="create signing_key table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.138671001Z level=info msg="Migration successfully executed" id="create signing_key table" duration=894.945µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.14042805Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.141510631Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.0825ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.143240719Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.144446863Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.207374ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.152501289Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.153063905Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=564.786µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.155691959Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.163330783Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.630694ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.16569509Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.166707298Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.013738ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.16856168Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.169802315Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.241225ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.171453121Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.172424939Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=968.827µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.192035159Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.195297411Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=3.290352ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.196926827Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.198012797Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.08651ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.224680976Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.226367033Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.701287ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.238028931Z level=info msg="Executing migration" id="create sso_setting table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.239540833Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.516872ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.254125643Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.256028096Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.908374ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.275224935Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.275697548Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=478.183µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.278427465Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.278500897Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=74.072µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.281012818Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.288330603Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.313125ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.29034193Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.297632434Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.284544ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.300652659Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.301091232Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=438.892µs
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=migrator t=2026-02-02T11:14:12.302631175Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.911410502s
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=sqlstore t=2026-02-02T11:14:12.304368874Z level=info msg="Created default organization"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=secrets t=2026-02-02T11:14:12.306294518Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=plugin.store t=2026-02-02T11:14:12.327491733Z level=info msg="Loading plugins..."
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=local.finder t=2026-02-02T11:14:12.380927403Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=plugin.store t=2026-02-02T11:14:12.380960114Z level=info msg="Plugins loaded" count=55 duration=53.469511ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=query_data t=2026-02-02T11:14:12.389400041Z level=info msg="Query Service initialization"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=live.push_http t=2026-02-02T11:14:12.392251671Z level=info msg="Live Push Gateway initialization"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.migration t=2026-02-02T11:14:12.394821443Z level=info msg=Starting
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.migration t=2026-02-02T11:14:12.395130702Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.migration orgID=1 t=2026-02-02T11:14:12.39542195Z level=info msg="Migrating alerts for organisation"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.migration orgID=1 t=2026-02-02T11:14:12.395963366Z level=info msg="Alerts found to migrate" alerts=0
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.migration t=2026-02-02T11:14:12.397475128Z level=info msg="Completed alerting migration"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.state.manager t=2026-02-02T11:14:12.426995927Z level=info msg="Running in alternative execution of Error/NoData mode"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=infra.usagestats.collector t=2026-02-02T11:14:12.4292393Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=provisioning.datasources t=2026-02-02T11:14:12.430350321Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=provisioning.alerting t=2026-02-02T11:14:12.440079814Z level=info msg="starting to provision alerting"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=provisioning.alerting t=2026-02-02T11:14:12.440105155Z level=info msg="finished to provision alerting"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.state.manager t=2026-02-02T11:14:12.440394253Z level=info msg="Warming state cache for startup"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.multiorg.alertmanager t=2026-02-02T11:14:12.440550048Z level=info msg="Starting MultiOrg Alertmanager"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=grafanaStorageLogger t=2026-02-02T11:14:12.440582639Z level=info msg="Storage starting"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=http.server t=2026-02-02T11:14:12.44385528Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=http.server t=2026-02-02T11:14:12.444351394Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=provisioning.dashboard t=2026-02-02T11:14:12.446293559Z level=info msg="starting to provision dashboards"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.state.manager t=2026-02-02T11:14:12.44920117Z level=info msg="State cache has been initialized" states=0 duration=8.804587ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ngalert.scheduler t=2026-02-02T11:14:12.449241312Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ticker t=2026-02-02T11:14:12.449293783Z level=info msg=starting first_tick=2026-02-02T11:14:20Z
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=plugins.update.checker t=2026-02-02T11:14:12.509167054Z level=info msg="Update check succeeded" duration=59.283895ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=grafana.update.checker t=2026-02-02T11:14:12.521653895Z level=info msg="Update check succeeded" duration=77.213268ms
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=sqlstore.transactions t=2026-02-02T11:14:12.529802534Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb 02 11:14:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb 02 11:14:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb 02 11:14:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb 02 11:14:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 59 pg[12.0( v 57'61 (0'0,57'61] local-lis/les=50/51 n=7 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.919807434s) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 57'60 mlcod 57'60 active pruub 180.851074219s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 59 pg[12.0( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.919807434s) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 57'60 mlcod 0'0 unknown pruub 180.851074219s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 9.15 scrub starts
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 9.15 scrub ok
Feb 02 11:14:12 compute-0 ceph-mon[74676]: Deploying daemon haproxy.rgw.default.compute-2.tfyreb on compute-2
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 10.10 scrub starts
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 10.10 scrub ok
Feb 02 11:14:12 compute-0 ceph-mon[74676]: pgmap v46: 322 pgs: 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:12 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 9.14 scrub starts
Feb 02 11:14:12 compute-0 ceph-mon[74676]: 9.14 scrub ok
Feb 02 11:14:12 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 11:14:12 compute-0 ceph-mon[74676]: osdmap e59: 3 total, 3 up, 3 in
Feb 02 11:14:12 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb 02 11:14:12 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=provisioning.dashboard t=2026-02-02T11:14:12.744483802Z level=info msg="finished to provision dashboards"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=grafana-apiserver t=2026-02-02T11:14:12.85908343Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Feb 02 11:14:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=grafana-apiserver t=2026-02-02T11:14:12.859951565Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Feb 02 11:14:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:13.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:14:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Feb 02 11:14:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.thdasj on compute-0
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.thdasj on compute-0
Feb 02 11:14:13 compute-0 sudo[99664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:13 compute-0 sudo[99664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:13 compute-0 sudo[99664]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:13 compute-0 sudo[99689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:13 compute-0 sudo[99689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb 02 11:14:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb 02 11:14:13 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.11( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.13( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.15( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.10( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.4( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.12( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.7( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.6( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.9( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.8( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.a( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.f( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.c( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.e( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.b( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.d( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.5( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.3( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1f( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1e( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.2( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1c( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1a( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1b( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.19( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.18( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.16( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.17( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.14( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1( v 57'61 (0'0,57'61] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1d( v 57'61 lc 0'0 (0'0,57'61] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.11( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.13( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.15( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.4( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.7( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.8( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.6( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.f( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.a( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.b( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.e( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.10( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.9( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.12( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.5( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.d( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1f( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1e( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1a( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.0( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 57'60 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.3( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.19( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.2( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.16( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.18( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1b( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.14( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.1d( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 60 pg[12.17( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=57'61 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.707539564 +0000 UTC m=+0.042881495 container create 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, vcs-type=git, architecture=x86_64, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container)
Feb 02 11:14:13 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 02 11:14:13 compute-0 systemd[1]: Started libpod-conmon-265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5.scope.
Feb 02 11:14:13 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 02 11:14:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.690359972 +0000 UTC m=+0.025701933 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.791112921 +0000 UTC m=+0.126454872 container init 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc.)
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.798048895 +0000 UTC m=+0.133390826 container start 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, io.buildah.version=1.28.2, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, version=2.2.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.80105195 +0000 UTC m=+0.136394021 container attach 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Feb 02 11:14:13 compute-0 systemd[1]: libpod-265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5.scope: Deactivated successfully.
Feb 02 11:14:13 compute-0 strange_volhard[99771]: 0 0
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.80533354 +0000 UTC m=+0.140675471 container died 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Feb 02 11:14:13 compute-0 conmon[99771]: conmon 265710d289f0f401be2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5.scope/container/memory.events
Feb 02 11:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ccec252d3a3295569be2fa6f996f1eb7794897b801c3a83e838b880de78d42f-merged.mount: Deactivated successfully.
Feb 02 11:14:13 compute-0 podman[99755]: 2026-02-02 11:14:13.838832701 +0000 UTC m=+0.174174632 container remove 265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5 (image=quay.io/ceph/keepalived:2.2.4, name=strange_volhard, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793)
Feb 02 11:14:13 compute-0 systemd[1]: libpod-conmon-265710d289f0f401be2f30773fc70ecc7ddd36aad885463a128c3118fe4aa9c5.scope: Deactivated successfully.
Feb 02 11:14:13 compute-0 systemd[1]: Reloading.
Feb 02 11:14:13 compute-0 systemd-sysv-generator[99820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:13 compute-0 systemd-rc-local-generator[99814]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:14 compute-0 systemd[1]: Reloading.
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 10.1f scrub starts
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 10.1f scrub ok
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 8.16 scrub starts
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 8.16 scrub ok
Feb 02 11:14:14 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mon[74676]: Deploying daemon keepalived.rgw.default.compute-0.thdasj on compute-0
Feb 02 11:14:14 compute-0 ceph-mon[74676]: osdmap e60: 3 total, 3 up, 3 in
Feb 02 11:14:14 compute-0 systemd-sysv-generator[99862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:14 compute-0 systemd-rc-local-generator[99857]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:14 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.thdasj for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:14 compute-0 podman[99919]: 2026-02-02 11:14:14.697816231 +0000 UTC m=+0.061705093 container create f90259b474b25f5103bce01fdc744d3859163eb55c7b9faf0a4ef5eec5a94ed9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, version=2.2.4)
Feb 02 11:14:14 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 02 11:14:14 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 02 11:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb34e96b49c19692a7d1438a709c4d69df6ce95b0c6d0b43e88d1b40859aa4a/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:14 compute-0 sudo[99959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozthrufiuspnyvlvcmabktiwatrwawfu ; /usr/bin/python3'
Feb 02 11:14:14 compute-0 podman[99919]: 2026-02-02 11:14:14.74905095 +0000 UTC m=+0.112939832 container init f90259b474b25f5103bce01fdc744d3859163eb55c7b9faf0a4ef5eec5a94ed9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Feb 02 11:14:14 compute-0 sudo[99959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:14:14 compute-0 podman[99919]: 2026-02-02 11:14:14.754271897 +0000 UTC m=+0.118160759 container start f90259b474b25f5103bce01fdc744d3859163eb55c7b9faf0a4ef5eec5a94ed9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, distribution-scope=public)
Feb 02 11:14:14 compute-0 bash[99919]: f90259b474b25f5103bce01fdc744d3859163eb55c7b9faf0a4ef5eec5a94ed9
Feb 02 11:14:14 compute-0 podman[99919]: 2026-02-02 11:14:14.676123582 +0000 UTC m=+0.040012474 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb 02 11:14:14 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.thdasj for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Starting Keepalived v2.2.4 (08/21,2021)
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Configuration file /etc/keepalived/keepalived.conf
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Starting VRRP child process, pid=4
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: Startup complete
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:14:14 2026: (VI_0) Entering BACKUP STATE
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: (VI_0) Entering BACKUP STATE (init)
Feb 02 11:14:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:14 2026: VRRP_Script(check_backend) succeeded
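
This startup burst shows a second keepalived on the host: the nfs-cephfs instance ([98247]) is already running with --net=host, so the new rgw instance fails to bind the shared process-monitoring socket (errno 98, EADDRINUSE, as logged above) and continues without it. Both instances start in BACKUP; the nfs one wins its VRRP election a moment later (MASTER at 11:14:15). The config cephadm renders for such an ingress instance has roughly this shape — the router id, priority, and health probe below are assumed, and a cephadm-managed file should never be hand-edited:

    # Rough shape of the generated /etc/keepalived/keepalived.conf (sketch only):
    cat <<'EOF'
    vrrp_script check_backend {
        script "/usr/bin/curl -sf http://localhost:8080/"   # assumed probe
        interval 2
        weight -20
    }

    vrrp_instance VI_0 {
        state BACKUP
        interface br-ex              # the interface the mgr picks for the VIP
        virtual_router_id 51         # assumed
        priority 90                  # assumed; differs per host
        virtual_ipaddress {
            192.168.122.2/24 dev br-ex   # the ingress VIP in the mgr lines below
        }
        track_script {
            check_backend
        }
    }
    EOF
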
Feb 02 11:14:14 compute-0 sudo[99689]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:14:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.slssio on compute-2
Feb 02 11:14:14 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.slssio on compute-2
Feb 02 11:14:14 compute-0 python3[99964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
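
The _raw_params above collapses into one long podman invocation; the same command reflowed for readability (content unchanged, whitespace only):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint radosgw-admin \
        quay.io/ceph/ceph:v19 \
        --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        user info --uid openstack
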
Feb 02 11:14:14 compute-0 podman[99969]: 2026-02-02 11:14:14.953297335 +0000 UTC m=+0.045112017 container create 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:14:14 compute-0 systemd[1]: Started libpod-conmon-1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9.scope.
Feb 02 11:14:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:14.933786138 +0000 UTC m=+0.025600840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9d1a9710e0e089edf0fbf4bf2dceb684bf47443448b77963f27543fc44f687/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9d1a9710e0e089edf0fbf4bf2dceb684bf47443448b77963f27543fc44f687/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:15.044539847 +0000 UTC m=+0.136354549 container init 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:15.052930663 +0000 UTC m=+0.144745345 container start 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:15.056189465 +0000 UTC m=+0.148004157 container attach 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:15.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:15 compute-0 vibrant_goldstine[99984]: could not fetch user info: no user info saved
Feb 02 11:14:15 compute-0 ceph-mon[74676]: 10.1e scrub starts
Feb 02 11:14:15 compute-0 ceph-mon[74676]: 10.1e scrub ok
Feb 02 11:14:15 compute-0 ceph-mon[74676]: pgmap v49: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:15 compute-0 ceph-mon[74676]: 9.17 scrub starts
Feb 02 11:14:15 compute-0 ceph-mon[74676]: 9.17 scrub ok
Feb 02 11:14:15 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:15 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:15 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:15 compute-0 systemd[1]: libpod-1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9.scope: Deactivated successfully.
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:15.248097774 +0000 UTC m=+0.339912456 container died 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9d1a9710e0e089edf0fbf4bf2dceb684bf47443448b77963f27543fc44f687-merged.mount: Deactivated successfully.
Feb 02 11:14:15 compute-0 podman[99969]: 2026-02-02 11:14:15.279240798 +0000 UTC m=+0.371055480 container remove 1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9 (image=quay.io/ceph/ceph:v19, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:15 compute-0 systemd[1]: libpod-conmon-1defc798d4948d67485f8161f9227b4012197ff2ab001d1e774b7064dd95c7a9.scope: Deactivated successfully.
Feb 02 11:14:15 compute-0 sudo[99959]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv[98247]: Mon Feb  2 11:14:15 2026: (VI_0) Entering MASTER STATE
Feb 02 11:14:15 compute-0 sudo[100107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdpbvcmovnradheiefrdjluouiwbffm ; /usr/bin/python3'
Feb 02 11:14:15 compute-0 sudo[100107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:14:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.532021) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855532130, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7027, "num_deletes": 259, "total_data_size": 13346770, "memory_usage": 13944448, "flush_reason": "Manual Compaction"}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb 02 11:14:15 compute-0 python3[100109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855615265, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11897167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 143, "largest_seqno": 7165, "table_properties": {"data_size": 11871623, "index_size": 16214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79846, "raw_average_key_size": 24, "raw_value_size": 11808479, "raw_average_value_size": 3591, "num_data_blocks": 714, "num_entries": 3288, "num_filter_entries": 3288, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030600, "oldest_key_time": 1770030600, "file_creation_time": 1770030855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 83315 microseconds, and 21845 cpu microseconds.
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.615342) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11897167 bytes OK
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.615368) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.616809) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.616832) EVENT_LOG_v1 {"time_micros": 1770030855616824, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.616856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13314974, prev total WAL file size 13314974, number of live WAL files 2.
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.617862947 +0000 UTC m=+0.059529263 container create 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.619117) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323539' seq:0, type:0; will stop at (end)
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855619223, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11958738, "oldest_snapshot_seqno": -1}
Feb 02 11:14:15 compute-0 systemd[1]: Started libpod-conmon-35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6.scope.
Feb 02 11:14:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.581222988 +0000 UTC m=+0.022889304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618caac2417ad110b463c24ab34e2da2e066300477a9a33c26de5247dbdd3d8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618caac2417ad110b463c24ab34e2da2e066300477a9a33c26de5247dbdd3d8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3103 keys, 11940749 bytes, temperature: kUnknown
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855688892, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11940749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11915506, "index_size": 16366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 78691, "raw_average_key_size": 25, "raw_value_size": 11853917, "raw_average_value_size": 3820, "num_data_blocks": 721, "num_entries": 3103, "num_filter_entries": 3103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770030855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.689469) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11940749 bytes
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.693828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.1 rd, 170.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.4, 0.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3398, records dropped: 295 output_compression: NoCompression
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.693859) EVENT_LOG_v1 {"time_micros": 1770030855693843, "job": 4, "event": "compaction_finished", "compaction_time_micros": 69907, "compaction_time_cpu_micros": 18618, "output_level": 6, "num_output_files": 1, "total_output_size": 11940749, "num_input_records": 3398, "num_output_records": 3103, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855695297, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855695372, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030855695417, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 02 11:14:15 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:15.618903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.69634057 +0000 UTC m=+0.138006896 container init 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.702046501 +0000 UTC m=+0.143712797 container start 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.705679953 +0000 UTC m=+0.147346249 container attach 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:14:15 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb 02 11:14:15 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb 02 11:14:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:15 compute-0 epic_booth[100127]: {
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "user_id": "openstack",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "display_name": "openstack",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "email": "",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "suspended": 0,
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "max_buckets": 1000,
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "subusers": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "keys": [
Feb 02 11:14:15 compute-0 epic_booth[100127]:         {
Feb 02 11:14:15 compute-0 epic_booth[100127]:             "user": "openstack",
Feb 02 11:14:15 compute-0 epic_booth[100127]:             "access_key": "1ASCKWJGE74B3H22DW9V",
Feb 02 11:14:15 compute-0 epic_booth[100127]:             "secret_key": "lSQDd4SllTE7DHr46r3YUIEWX2xYWAX6h1y5FeR6",
Feb 02 11:14:15 compute-0 epic_booth[100127]:             "active": true,
Feb 02 11:14:15 compute-0 epic_booth[100127]:             "create_date": "2026-02-02T11:14:15.858420Z"
Feb 02 11:14:15 compute-0 epic_booth[100127]:         }
Feb 02 11:14:15 compute-0 epic_booth[100127]:     ],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "swift_keys": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "caps": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "op_mask": "read, write, delete",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "default_placement": "",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "default_storage_class": "",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "placement_tags": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "bucket_quota": {
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "enabled": false,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "check_on_raw": false,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_size": -1,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_size_kb": 0,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_objects": -1
Feb 02 11:14:15 compute-0 epic_booth[100127]:     },
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "user_quota": {
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "enabled": false,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "check_on_raw": false,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_size": -1,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_size_kb": 0,
Feb 02 11:14:15 compute-0 epic_booth[100127]:         "max_objects": -1
Feb 02 11:14:15 compute-0 epic_booth[100127]:     },
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "temp_url_keys": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "type": "rgw",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "mfa_ids": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "account_id": "",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "path": "/",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "create_date": "2026-02-02T11:14:15.857918Z",
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "tags": [],
Feb 02 11:14:15 compute-0 epic_booth[100127]:     "group_ids": []
Feb 02 11:14:15 compute-0 epic_booth[100127]: }
Feb 02 11:14:15 compute-0 epic_booth[100127]: 
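
Taken together, the two radosgw-admin runs are a check-then-create: `user info --uid openstack` failed above with "could not fetch user info: no user info saved", so the play ran `user create` and printed the resulting user record. A plain-shell sketch of the same idempotent pattern (run inside the container as above; error handling trimmed):

    if ! radosgw-admin user info --uid openstack >/dev/null 2>&1; then
        radosgw-admin user create --uid "openstack" --display-name "openstack"
    fi
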
Feb 02 11:14:15 compute-0 systemd[1]: libpod-35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6.scope: Deactivated successfully.
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.916682358 +0000 UTC m=+0.358348664 container died 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-618caac2417ad110b463c24ab34e2da2e066300477a9a33c26de5247dbdd3d8d-merged.mount: Deactivated successfully.
Feb 02 11:14:15 compute-0 podman[100112]: 2026-02-02 11:14:15.952432462 +0000 UTC m=+0.394098758 container remove 35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6 (image=quay.io/ceph/ceph:v19, name=epic_booth, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:14:15 compute-0 systemd[1]: libpod-conmon-35c72eb3952998bddd3f4414f32e81b5da6c17bb51ee866f8a264a5519603aa6.scope: Deactivated successfully.
Feb 02 11:14:15 compute-0 sudo[100107]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 10.12 scrub starts
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 10.12 scrub ok
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb 02 11:14:16 compute-0 ceph-mon[74676]: Deploying daemon keepalived.rgw.default.compute-2.slssio on compute-2
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 8.17 scrub starts
Feb 02 11:14:16 compute-0 ceph-mon[74676]: 8.17 scrub ok
Feb 02 11:14:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:14:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev a88b9a6c-32cf-47e6-9ed7-f8fc9850ae53 (Updating ingress.rgw.default deployment (+4 -> 4))
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event a88b9a6c-32cf-47e6-9ed7-f8fc9850ae53 (Updating ingress.rgw.default deployment (+4 -> 4)) in 7 seconds
Feb 02 11:14:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb 02 11:14:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: [progress INFO root] update: starting ev be544964-a817-408e-9c2d-1216362b26c8 (Updating prometheus deployment (+1 -> 1))
Feb 02 11:14:16 compute-0 python3[100251]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:14:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb 02 11:14:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb 02 11:14:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
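
The ganesha lines read as a clean first boot rather than a failure: rados_kv_traverse returns -2 (-ENOENT) because the RADOS recovery database has no client records yet, so zero client IDs are reloaded and grace proceeds with clid count(0). (The -45 from rados_cluster_grace_enforcing is left uninterpreted here.) Confirming the errno meaning:

    python3 -c 'import errno, os; print(errno.errorcode[2], "-", os.strerror(2))'
    # ENOENT - No such file or directory
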
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: [dashboard INFO request] [192.168.122.100:46950] [GET] [200] [0.136s] [6.3K] [4aef7662-6eac-44b5-9de9-ce3ddae650a9] /
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Feb 02 11:14:16 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Feb 02 11:14:16 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 02 11:14:16 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 02 11:14:16 compute-0 sudo[100252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:16 compute-0 sudo[100252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:16 compute-0 sudo[100252]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:16 compute-0 sudo[100277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:16 compute-0 sudo[100277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:17 compute-0 python3[100326]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
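
The module args are logged in full, so the task behind this invocation can be reconstructed. A hypothetical rendering (task name and variable names are assumed; the credential values are redacted in the log itself, and stay that way):

    # Hypothetical Ansible task matching the logged get_url args:
    - name: Probe the Ceph dashboard with basic auth
      ansible.builtin.get_url:
        url: "http://192.168.122.100:8443"
        dest: /tmp/dash_http_response
        mode: "0644"
        validate_certs: false
        timeout: 10
        url_username: "{{ dashboard_username }}"   # assumed variable name
        url_password: "{{ dashboard_password }}"   # assumed variable name
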
Feb 02 11:14:17 compute-0 ceph-mgr[74969]: [dashboard INFO request] [192.168.122.100:46952] [GET] [200] [0.002s] [6.3K] [65d2b33c-8a90-4fb2-9f8a-a8f30a80bc4c] /
Feb 02 11:14:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:17.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 10.1c scrub starts
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 10.1c scrub ok
Feb 02 11:14:17 compute-0 ceph-mon[74676]: pgmap v50: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 9.10 scrub starts
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 9.10 scrub ok
Feb 02 11:14:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:17 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 10.1a scrub starts
Feb 02 11:14:17 compute-0 ceph-mon[74676]: 10.1a scrub ok
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:14:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:14:17 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb 02 11:14:17 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb 02 11:14:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:14:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:14:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:14:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 02 11:14:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Feb 02 11:14:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:14:17 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
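
pgp_num_actual is the knob the mgr steps internally until placement sharding matches pg_num; the operator-facing equivalent is pgp_num, and changing it re-shards placements — which is exactly the peering churn the osd.1 lines below record. Sketch of the equivalent CLI, using the same pool and value as the audit lines:

    ceph osd pool set .nfs pgp_num 32
    ceph osd pool get .nfs pgp_num     # verify the applied value
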
Feb 02 11:14:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb 02 11:14:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.11( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.342217445s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.953842163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.15( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.330250740s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 185.941940308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.15( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.330193520s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 185.941940308s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.11( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.342030525s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.953842163s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.10( v 60'64 (0'0,60'64] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.346493721s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 60'63 mlcod 60'63 active pruub 187.958190918s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.12( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.346144676s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958236694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.12( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.346097946s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958236694s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.14( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329804420s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 185.942001343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.10( v 60'64 (0'0,60'64] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.346149445s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 60'63 mlcod 0'0 unknown NOTIFY pruub 187.958190918s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.14( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329733849s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 185.942001343s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.13( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329538345s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.941940308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.13( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345215797s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957473755s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.2( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329447746s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.941894531s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.13( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329494476s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.941940308s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.2( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329426765s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.941894531s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.7( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345029831s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957641602s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.13( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345030785s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957473755s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.7( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345012665s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957641602s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329119682s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.941726685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.329072952s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.941726685s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.f( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.328927994s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.941726685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.f( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.328914642s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.941726685s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.4( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344757080s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957595825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.6( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344784737s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957626343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.6( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344754219s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957626343s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.9( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345039368s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957962036s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.9( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.345010757s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957962036s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.4( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344675064s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957595825s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.a( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344924927s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958053589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.a( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344912529s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958053589s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.8( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344503403s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.957717896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344923973s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958160400s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344899178s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958160400s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.8( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344451904s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.957717896s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.b( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344563484s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958114624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.8( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.327331543s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940963745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.8( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.327317238s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940963745s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.e( v 60'62 (0'0,60'62] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344429970s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 57'61 mlcod 57'61 active pruub 187.958099365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.b( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344534874s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958114624s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.e( v 60'62 (0'0,60'62] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344339371s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 57'61 mlcod 0'0 unknown NOTIFY pruub 187.958099365s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.4( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.327002525s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940902710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.4( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326991081s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940902710s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.2( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344567299s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958572388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.5( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326927185s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940948486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.5( v 40'48 (0'0,40'48] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326916695s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940948486s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.3( v 60'57 (0'0,60'57] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.327651978s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 185.941635132s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.2( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344548225s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958572388s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.3( v 60'57 (0'0,60'57] local-lis/les=57/58 n=1 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.327492714s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 185.941635132s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1e( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344107628s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958343506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1e( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.344093323s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958343506s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.18( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326441765s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940780640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.18( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326425552s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940780640s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.19( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326295853s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940780640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.19( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.326274872s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940780640s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.3( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343912125s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958450317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1a( v 60'64 (0'0,60'64] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343733788s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 60'63 mlcod 60'63 active pruub 187.958389282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1a( v 60'64 (0'0,60'64] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343689919s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'62 lcod 60'63 mlcod 0'0 unknown NOTIFY pruub 187.958389282s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343698502s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958450317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1e( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325822830s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940643311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1e( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325802803s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940643311s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.19( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343569756s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958496094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1c( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343611717s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958450317s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.3( v 57'61 (0'0,57'61] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343677521s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958450317s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.19( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343556404s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958496094s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.10( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.319291115s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.934326172s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.18( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343444824s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958496094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.11( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325754166s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940872192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.10( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.319268227s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.934326172s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.11( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325736046s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940872192s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.18( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343322754s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958496094s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.17( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343391418s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958755493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.17( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.343376160s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958755493s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1b( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325149536s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940719604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.12( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.325120926s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 active pruub 185.940673828s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.1b( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.324115753s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940719604s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1d( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.342018127s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 active pruub 187.958740234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[10.12( v 40'48 (0'0,40'48] local-lis/les=57/58 n=0 ec=57/39 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.324033737s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=40'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.940673828s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[12.1d( v 57'61 (0'0,57'61] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.341926575s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=57'61 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.958740234s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:18 compute-0 ceph-mon[74676]: Deploying daemon prometheus.compute-0 on compute-0
Feb 02 11:14:18 compute-0 ceph-mon[74676]: 8.10 scrub starts
Feb 02 11:14:18 compute-0 ceph-mon[74676]: 8.10 scrub ok
Feb 02 11:14:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.17( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.8( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.19( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-rgw-default-compute-0-thdasj[99957]: Mon Feb  2 11:14:18 2026: (VI_0) Entering MASTER STATE
Feb 02 11:14:18 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb 02 11:14:18 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb 02 11:14:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb 02 11:14:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb 02 11:14:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb 02 11:14:19 compute-0 ceph-mon[74676]: 10.1b scrub starts
Feb 02 11:14:19 compute-0 ceph-mon[74676]: 10.1b scrub ok
Feb 02 11:14:19 compute-0 ceph-mon[74676]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:19 compute-0 ceph-mon[74676]: 9.16 deep-scrub starts
Feb 02 11:14:19 compute-0 ceph-mon[74676]: 9.16 deep-scrub ok
Feb 02 11:14:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 11:14:19 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:14:19 compute-0 ceph-mon[74676]: osdmap e61: 3 total, 3 up, 3 in
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.19( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=61/62 n=1 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=61/62 n=0 ec=55/35 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:19.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 02 11:14:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Feb 02 11:14:19 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 02 11:14:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 02 11:14:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb 02 11:14:20 compute-0 ceph-mon[74676]: 10.16 scrub starts
Feb 02 11:14:20 compute-0 ceph-mon[74676]: 10.16 scrub ok
Feb 02 11:14:20 compute-0 ceph-mon[74676]: 11.15 scrub starts
Feb 02 11:14:20 compute-0 ceph-mon[74676]: 11.15 scrub ok
Feb 02 11:14:20 compute-0 ceph-mon[74676]: osdmap e62: 3 total, 3 up, 3 in
Feb 02 11:14:20 compute-0 ceph-mon[74676]: pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:20 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Feb 02 11:14:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 11:14:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb 02 11:14:20 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb 02 11:14:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.607237869 +0000 UTC m=+3.493235252 volume create 59dffc9f898df729366b94203176cc37f03e4e21669282a720184300aeb42ba8
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.614671408 +0000 UTC m=+3.500668801 container create 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.591686512 +0000 UTC m=+3.477683935 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb 02 11:14:20 compute-0 systemd[1]: Started libpod-conmon-561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3.scope.
Feb 02 11:14:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d90ad3e51f659642055e760c8f48c2de707ecb4199839bd6c75fbeebebed1/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.698931264 +0000 UTC m=+3.584928677 container init 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.70626599 +0000 UTC m=+3.592263373 container start 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 focused_wilbur[100625]: 65534 65534
Feb 02 11:14:20 compute-0 systemd[1]: libpod-561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3.scope: Deactivated successfully.
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.709881501 +0000 UTC m=+3.595878914 container attach 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.711221719 +0000 UTC m=+3.597219142 container died 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.15 deep-scrub starts
Feb 02 11:14:20 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.15 deep-scrub ok
Feb 02 11:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05d90ad3e51f659642055e760c8f48c2de707ecb4199839bd6c75fbeebebed1-merged.mount: Deactivated successfully.
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.751885401 +0000 UTC m=+3.637882824 container remove 561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=focused_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100368]: 2026-02-02 11:14:20.755128582 +0000 UTC m=+3.641125985 volume remove 59dffc9f898df729366b94203176cc37f03e4e21669282a720184300aeb42ba8
Feb 02 11:14:20 compute-0 systemd[1]: libpod-conmon-561ccca80652ab48c044d0fcee7df9b43c8ff88440faea25d4031704c32f53e3.scope: Deactivated successfully.
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.81666191 +0000 UTC m=+0.036857786 volume create fc131dc787f59b48d0a73d7007a4f4da92f53e7c71d2ceea9a0555ce8152b754
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.826201157 +0000 UTC m=+0.046397033 container create 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 systemd[1]: Started libpod-conmon-2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9.scope.
Feb 02 11:14:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfae90ce14bd038eeddab2d33eb58ecc49b78424ec55ad6a6018499320a6e297/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.803823429 +0000 UTC m=+0.024019325 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.909375913 +0000 UTC m=+0.129571799 container init 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.913403126 +0000 UTC m=+0.133599002 container start 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 kind_merkle[100659]: 65534 65534
Feb 02 11:14:20 compute-0 systemd[1]: libpod-2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9.scope: Deactivated successfully.
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.917394508 +0000 UTC m=+0.137590384 container attach 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.917793859 +0000 UTC m=+0.137989765 container died 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfae90ce14bd038eeddab2d33eb58ecc49b78424ec55ad6a6018499320a6e297-merged.mount: Deactivated successfully.
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.952113983 +0000 UTC m=+0.172309859 container remove 2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:20 compute-0 podman[100642]: 2026-02-02 11:14:20.954832029 +0000 UTC m=+0.175027925 volume remove fc131dc787f59b48d0a73d7007a4f4da92f53e7c71d2ceea9a0555ce8152b754
Feb 02 11:14:20 compute-0 systemd[1]: libpod-conmon-2c36d36fa3ae15b19fe72e323a99274d63426b760b9b76cdf84aa7096f63c3f9.scope: Deactivated successfully.
Feb 02 11:14:21 compute-0 systemd[1]: Reloading.
Feb 02 11:14:21 compute-0 systemd-rc-local-generator[100705]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:21 compute-0 systemd-sysv-generator[100710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:21 compute-0 ceph-mgr[74969]: [progress INFO root] Writing back 25 completed events
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:21 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event dd016506-bb67-411a-8d4d-49833dc81977 (Global Recovery Event) in 10 seconds
Feb 02 11:14:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:21 compute-0 systemd[1]: Reloading.
Feb 02 11:14:21 compute-0 ceph-mon[74676]: 10.17 scrub starts
Feb 02 11:14:21 compute-0 ceph-mon[74676]: 10.17 scrub ok
Feb 02 11:14:21 compute-0 ceph-mon[74676]: 9.11 scrub starts
Feb 02 11:14:21 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 11:14:21 compute-0 ceph-mon[74676]: 9.11 scrub ok
Feb 02 11:14:21 compute-0 ceph-mon[74676]: osdmap e63: 3 total, 3 up, 3 in
Feb 02 11:14:21 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:21 compute-0 systemd-sysv-generator[100750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:14:21 compute-0 systemd-rc-local-generator[100747]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:14:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:21.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:21 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb 02 11:14:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 173 B/s, 1 objects/s recovering
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Feb 02 11:14:21 compute-0 podman[100805]: 2026-02-02 11:14:21.757167259 +0000 UTC m=+0.055160900 container create 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:21 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb 02 11:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc62ed13538f4f4e339b52d91659ab71b8be96b954176e8aaf913a16ac407587/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc62ed13538f4f4e339b52d91659ab71b8be96b954176e8aaf913a16ac407587/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:21 compute-0 podman[100805]: 2026-02-02 11:14:21.810874618 +0000 UTC m=+0.108868289 container init 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:21 compute-0 podman[100805]: 2026-02-02 11:14:21.814420107 +0000 UTC m=+0.112413748 container start 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:21 compute-0 podman[100805]: 2026-02-02 11:14:21.723859464 +0000 UTC m=+0.021853175 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb 02 11:14:21 compute-0 bash[100805]: 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e
Feb 02 11:14:21 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.846Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.846Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.846Z caller=main.go:623 level=info host_details="(Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 x86_64 compute-0 (none))"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.846Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.846Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.849Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.850Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.853Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.853Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.858Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.858Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.89µs
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.858Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.858Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.858Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=31.711µs wal_replay_duration=482.123µs wbl_replay_duration=260ns total_replay_duration=543.715µs
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.860Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.860Z caller=main.go:1153 level=info msg="TSDB started"
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.860Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Feb 02 11:14:21 compute-0 sudo[100277]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.892Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=31.148455ms db_storage=2.38µs remote_storage=2.9µs web_handler=1.091µs query_engine=1.65µs scrape=3.57492ms scrape_sd=418.702µs notify=30.871µs notify_sd=26.83µs rules=26.483894ms tracing=20.261µs
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.892Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Feb 02 11:14:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0[100820]: ts=2026-02-02T11:14:21.892Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:21 compute-0 ceph-mgr[74969]: [progress INFO root] complete: finished ev be544964-a817-408e-9c2d-1216362b26c8 (Updating prometheus deployment (+1 -> 1))
Feb 02 11:14:21 compute-0 ceph-mgr[74969]: [progress INFO root] Completed event be544964-a817-408e-9c2d-1216362b26c8 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Feb 02 11:14:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Feb 02 11:14:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:14:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb 02 11:14:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 11:14:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb 02 11:14:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 12.15 deep-scrub starts
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 12.15 deep-scrub ok
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 12.7 scrub starts
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 12.7 scrub ok
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 11.c scrub starts
Feb 02 11:14:22 compute-0 ceph-mon[74676]: 11.c scrub ok
Feb 02 11:14:22 compute-0 ceph-mon[74676]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 173 B/s, 1 objects/s recovering
Feb 02 11:14:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Feb 02 11:14:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:22 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:14:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 02 11:14:22 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 02 11:14:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Feb 02 11:14:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.dhyzzj(active, since 77s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:23 compute-0 sshd-session[93342]: Connection closed by 192.168.122.100 port 57808
Feb 02 11:14:23 compute-0 sshd-session[93311]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 11:14:23 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Feb 02 11:14:23 compute-0 systemd[1]: session-35.scope: Consumed 45.355s CPU time.
Feb 02 11:14:23 compute-0 systemd-logind[793]: Session 35 logged out. Waiting for processes to exit.
Feb 02 11:14:23 compute-0 systemd-logind[793]: Removed session 35.
Feb 02 11:14:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setuser ceph since I am not root
Feb 02 11:14:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ignoring --setgroup ceph since I am not root
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: pidfile_write: ignore empty --pid-file
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'alerts'
Feb 02 11:14:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:23.169+0000 7f3d65fb8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'balancer'
Feb 02 11:14:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:23.261+0000 7f3d65fb8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 02 11:14:23 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'cephadm'
Feb 02 11:14:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 10.0 scrub starts
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 10.3 scrub starts
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 10.0 scrub ok
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 10.3 scrub ok
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 11.0 scrub starts
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 11.0 scrub ok
Feb 02 11:14:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 11:14:23 compute-0 ceph-mon[74676]: osdmap e64: 3 total, 3 up, 3 in
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 10.c scrub starts
Feb 02 11:14:23 compute-0 ceph-mon[74676]: 12.1a scrub starts
Feb 02 11:14:23 compute-0 ceph-mon[74676]: from='mgr.14490 192.168.122.100:0/3852372066' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Feb 02 11:14:23 compute-0 ceph-mon[74676]: mgrmap e27: compute-0.dhyzzj(active, since 77s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb 02 11:14:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb 02 11:14:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:23.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 02 11:14:23 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'crash'
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:24.079+0000 7f3d65fb8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'dashboard'
Feb 02 11:14:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb 02 11:14:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb 02 11:14:24 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 10.c scrub ok
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 12.1a scrub ok
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 11.b scrub starts
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 11.b scrub ok
Feb 02 11:14:24 compute-0 ceph-mon[74676]: osdmap e65: 3 total, 3 up, 3 in
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 8.6 deep-scrub starts
Feb 02 11:14:24 compute-0 ceph-mon[74676]: 8.6 deep-scrub ok
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'devicehealth'
Feb 02 11:14:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Feb 02 11:14:24 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:24.786+0000 7f3d65fb8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]:   from numpy import show_config as show_numpy_config
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:24.931+0000 7f3d65fb8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 02 11:14:24 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'influx'
Feb 02 11:14:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:24.998+0000 7f3d65fb8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'insights'
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'iostat'
Feb 02 11:14:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:25.121+0000 7f3d65fb8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'k8sevents'
Feb 02 11:14:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb 02 11:14:25 compute-0 ceph-mon[74676]: 10.a scrub starts
Feb 02 11:14:25 compute-0 ceph-mon[74676]: 10.a scrub ok
Feb 02 11:14:25 compute-0 ceph-mon[74676]: 11.9 scrub starts
Feb 02 11:14:25 compute-0 ceph-mon[74676]: 11.9 scrub ok
Feb 02 11:14:25 compute-0 ceph-mon[74676]: osdmap e66: 3 total, 3 up, 3 in
Feb 02 11:14:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb 02 11:14:25 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'localpool'
Feb 02 11:14:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:25.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 11:14:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.f deep-scrub starts
Feb 02 11:14:25 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.f deep-scrub ok
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'mirroring'
Feb 02 11:14:25 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'nfs'
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.111+0000 7f3d65fb8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'orchestrator'
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.332+0000 7f3d65fb8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.434+0000 7f3d65fb8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'osd_support'
Feb 02 11:14:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb 02 11:14:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb 02 11:14:26 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb 02 11:14:26 compute-0 ceph-mon[74676]: 10.e deep-scrub starts
Feb 02 11:14:26 compute-0 ceph-mon[74676]: 10.e deep-scrub ok
Feb 02 11:14:26 compute-0 ceph-mon[74676]: 11.d deep-scrub starts
Feb 02 11:14:26 compute-0 ceph-mon[74676]: 11.d deep-scrub ok
Feb 02 11:14:26 compute-0 ceph-mon[74676]: osdmap e67: 3 total, 3 up, 3 in
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.512+0000 7f3d65fb8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.596+0000 7f3d65fb8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'progress'
Feb 02 11:14:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:26.670+0000 7f3d65fb8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 02 11:14:26 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'prometheus'
Feb 02 11:14:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 02 11:14:26 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 02 11:14:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:27.027+0000 7f3d65fb8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rbd_support'
Feb 02 11:14:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:27.136+0000 7f3d65fb8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'restful'
Feb 02 11:14:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rgw'
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 8.1c scrub starts
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 8.1c scrub ok
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 12.f deep-scrub starts
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 12.f deep-scrub ok
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 8.e scrub starts
Feb 02 11:14:27 compute-0 ceph-mon[74676]: 8.e scrub ok
Feb 02 11:14:27 compute-0 ceph-mon[74676]: osdmap e68: 3 total, 3 up, 3 in
Feb 02 11:14:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:27.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:27.617+0000 7f3d65fb8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 02 11:14:27 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'rook'
Feb 02 11:14:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 02 11:14:27 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 02 11:14:27 compute-0 sshd-session[100873]: Invalid user jupyter from 80.94.92.186 port 43522
Feb 02 11:14:28 compute-0 sshd-session[100873]: Connection closed by invalid user jupyter 80.94.92.186 port 43522 [preauth]
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.203+0000 7f3d65fb8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'selftest'
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.281+0000 7f3d65fb8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'snap_schedule'
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.359+0000 7f3d65fb8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'stats'
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'status'
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.524+0000 7f3d65fb8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telegraf'
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.605+0000 7f3d65fb8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'telemetry'
Feb 02 11:14:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:28.771+0000 7f3d65fb8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 02 11:14:28 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 11:14:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 02 11:14:28 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 10.9 scrub starts
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 10.9 scrub ok
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 8.1f scrub starts
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 8.1f scrub ok
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 11.2 scrub starts
Feb 02 11:14:28 compute-0 ceph-mon[74676]: 11.2 scrub ok
Feb 02 11:14:28 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:14:28 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.iybsjv started
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.007+0000 7f3d65fb8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'volumes'
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe restarted
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zebspe started
Feb 02 11:14:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111429 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.323+0000 7f3d65fb8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr[py] Loading python module 'zabbix'
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.409+0000 7f3d65fb8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dhyzzj
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: ms_deliver_dispatch: unhandled message 0x5592866a1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr handle_mgr_map Activating!
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr handle_mgr_map I am now activating
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.dhyzzj(active, starting, since 0.0303742s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 all = 0
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 all = 0
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 all = 0
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).mds e9 all = 1
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: balancer
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : Manager daemon compute-0.dhyzzj is now available
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:14:29
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: cephadm
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: crash
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: dashboard
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO access_control] Loading user roles DB version=2
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO sso] Loading SSO DB version=1
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: devicehealth
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: iostat
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: nfs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: orchestrator
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: pg_autoscaler
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: progress
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:14:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [progress INFO root] Loading...
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f3ce5ab0520>, <progress.module.GhostEvent object at 0x7f3ce5ab0550>, <progress.module.GhostEvent object at 0x7f3ce5ab0580>, <progress.module.GhostEvent object at 0x7f3ce5ab05b0>, <progress.module.GhostEvent object at 0x7f3ce5ab05e0>, <progress.module.GhostEvent object at 0x7f3ce5ab0610>, <progress.module.GhostEvent object at 0x7f3ce5ab0640>, <progress.module.GhostEvent object at 0x7f3ce5ab0670>, <progress.module.GhostEvent object at 0x7f3ce5ab06a0>, <progress.module.GhostEvent object at 0x7f3ce5ab06d0>, <progress.module.GhostEvent object at 0x7f3ce5ab0700>, <progress.module.GhostEvent object at 0x7f3ce5ab0730>, <progress.module.GhostEvent object at 0x7f3ce5ab0760>, <progress.module.GhostEvent object at 0x7f3ce5ab0790>, <progress.module.GhostEvent object at 0x7f3ce5ab07c0>, <progress.module.GhostEvent object at 0x7f3ce5ab07f0>, <progress.module.GhostEvent object at 0x7f3ce5ab0820>, <progress.module.GhostEvent object at 0x7f3ce5ab0850>, <progress.module.GhostEvent object at 0x7f3ce5ab0880>, <progress.module.GhostEvent object at 0x7f3ce5ab08b0>, <progress.module.GhostEvent object at 0x7f3ce5ab08e0>, <progress.module.GhostEvent object at 0x7f3ce5ab0910>, <progress.module.GhostEvent object at 0x7f3ce5ab0940>, <progress.module.GhostEvent object at 0x7f3ce5ab0970>, <progress.module.GhostEvent object at 0x7f3ce5ab09a0>] historic events
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: prometheus
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] recovery thread starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] starting setup
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: rbd_support
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: restful
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [restful INFO root] server_addr: :: server_port: 8003
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: status
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: telemetry
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO root] server_addr: :: server_port: 9283
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO root] Cache enabled
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"} v 0)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO root] starting metric collection thread
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO root] Starting engine...
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:29] ENGINE Bus STARTING
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [restful WARNING root] server not running: no certificate configured
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:29] ENGINE Bus STARTING
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: CherryPy Checker:
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: The Application mounted at '' has an empty config.
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] PerfHandler: starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: mgr load Constructed class from module: volumes
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cce7c1640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cd49cd640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cd49cd640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cd49cd640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cd49cd640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:14:29.636+0000 7f3cd49cd640 -1 client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: client.0 error registering admin socket command: (17) File exists
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_task_task: images, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TaskHandler: starting
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"} v 0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] setup complete
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:29] ENGINE Serving on http://:::9283
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:29] ENGINE Serving on http://:::9283
Feb 02 11:14:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:29] ENGINE Bus STARTED
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:29] ENGINE Bus STARTED
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [prometheus INFO root] Engine started.
Feb 02 11:14:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.d scrub starts
Feb 02 11:14:29 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.d scrub ok
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb 02 11:14:29 compute-0 sshd-session[101034]: Accepted publickey for ceph-admin from 192.168.122.100 port 38876 ssh2: RSA SHA256:eWUWGDEwQPDg9/s9CpsGuwnfzTveEUFcQS7GO5kGIdo
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb 02 11:14:29 compute-0 systemd-logind[793]: New session 37 of user ceph-admin.
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb 02 11:14:29 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Feb 02 11:14:29 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Feb 02 11:14:29 compute-0 sshd-session[101034]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 10.d scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 10.d scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 11.19 scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 11.19 scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 8.1 scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 8.1 scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 11.3 scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 10.b scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 11.3 scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 10.b scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv restarted
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Standby manager daemon compute-1.iybsjv started
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe restarted
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Standby manager daemon compute-2.zebspe started
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 8.0 scrub starts
Feb 02 11:14:29 compute-0 ceph-mon[74676]: 8.0 scrub ok
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Active manager daemon compute-0.dhyzzj restarted
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Activating manager daemon compute-0.dhyzzj
Feb 02 11:14:29 compute-0 ceph-mon[74676]: osdmap e69: 3 total, 3 up, 3 in
Feb 02 11:14:29 compute-0 ceph-mon[74676]: mgrmap e28: compute-0.dhyzzj(active, starting, since 0.0303742s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kwzngg"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mzpewh"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ajwnpf"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dhyzzj", "id": "compute-0.dhyzzj"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zebspe", "id": "compute-2.zebspe"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.iybsjv", "id": "compute-1.iybsjv"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: Manager daemon compute-0.dhyzzj is now available
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/mirror_snapshot_schedule"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:14:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dhyzzj/trash_purge_schedule"}]: dispatch
Feb 02 11:14:30 compute-0 sudo[101048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:30 compute-0 sudo[101048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:30 compute-0 sudo[101048]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:30 compute-0 sudo[101074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:14:30 compute-0 sudo[101074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:30 compute-0 ceph-mgr[74969]: [dashboard INFO dashboard.module] Engine started.
Feb 02 11:14:30 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.dhyzzj(active, since 1.05114s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:30 compute-0 podman[101173]: 2026-02-02 11:14:30.617673314 +0000 UTC m=+0.061168359 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:14:30 compute-0 podman[101173]: 2026-02-02 11:14:30.716223611 +0000 UTC m=+0.159718656 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:14:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Feb 02 11:14:30 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Feb 02 11:14:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:31.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:31 compute-0 podman[101307]: 2026-02-02 11:14:31.283614903 +0000 UTC m=+0.163880113 container exec c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:31 compute-0 podman[101307]: 2026-02-02 11:14:31.318205824 +0000 UTC m=+0.198471014 container exec_died c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:14:31] ENGINE Bus STARTING
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:14:31] ENGINE Bus STARTING
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000005:nfs.cephfs.2: -2
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:14:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:14:31] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:14:31] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 11.8 scrub starts
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 11.8 scrub ok
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 12.d scrub starts
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 12.d scrub ok
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 8.7 scrub starts
Feb 02 11:14:31 compute-0 ceph-mon[74676]: 8.7 scrub ok
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mgrmap e29: compute-0.dhyzzj(active, since 1.05114s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:31 compute-0 ceph-mon[74676]: pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.dhyzzj(active, since 2s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb 02 11:14:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:31.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb 02 11:14:31 compute-0 podman[101418]: 2026-02-02 11:14:31.556409153 +0000 UTC m=+0.047191376 container exec 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:14:31] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:14:31] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:14:31] ENGINE Client ('192.168.122.100', 51448) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:14:31] ENGINE Bus STARTED
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:14:31] ENGINE Client ('192.168.122.100', 51448) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:14:31] ENGINE Bus STARTED
Feb 02 11:14:31 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:14:31 compute-0 podman[101449]: 2026-02-02 11:14:31.622959662 +0000 UTC m=+0.049322216 container exec_died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:14:31 compute-0 podman[101418]: 2026-02-02 11:14:31.628881428 +0000 UTC m=+0.119663631 container exec_died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:31 compute-0 podman[101493]: 2026-02-02 11:14:31.825646213 +0000 UTC m=+0.067983730 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:31 compute-0 podman[101493]: 2026-02-02 11:14:31.835117539 +0000 UTC m=+0.077455036 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:14:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Feb 02 11:14:31 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Feb 02 11:14:32 compute-0 podman[101558]: 2026-02-02 11:14:32.020222937 +0000 UTC m=+0.053332159 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793)
Feb 02 11:14:32 compute-0 podman[101577]: 2026-02-02 11:14:32.087004002 +0000 UTC m=+0.051103066 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, io.openshift.expose-services=, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Feb 02 11:14:32 compute-0 podman[101558]: 2026-02-02 11:14:32.092103606 +0000 UTC m=+0.125212818 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 podman[101622]: 2026-02-02 11:14:32.264797855 +0000 UTC m=+0.045312764 container exec 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:32 compute-0 podman[101622]: 2026-02-02 11:14:32.291235747 +0000 UTC m=+0.071750656 container exec_died 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:32 compute-0 podman[101693]: 2026-02-02 11:14:32.476075598 +0000 UTC m=+0.054010308 container exec ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 8.b scrub starts
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 8.b scrub ok
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 12.5 scrub starts
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 12.5 scrub ok
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 11.6 scrub starts
Feb 02 11:14:32 compute-0 ceph-mon[74676]: 11.6 scrub ok
Feb 02 11:14:32 compute-0 ceph-mon[74676]: [02/Feb/2026:11:14:31] ENGINE Bus STARTING
Feb 02 11:14:32 compute-0 ceph-mon[74676]: [02/Feb/2026:11:14:31] ENGINE Serving on http://192.168.122.100:8765
Feb 02 11:14:32 compute-0 ceph-mon[74676]: pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mgrmap e30: compute-0.dhyzzj(active, since 2s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 11:14:32 compute-0 ceph-mon[74676]: osdmap e70: 3 total, 3 up, 3 in
Feb 02 11:14:32 compute-0 ceph-mon[74676]: [02/Feb/2026:11:14:31] ENGINE Serving on https://192.168.122.100:7150
Feb 02 11:14:32 compute-0 ceph-mon[74676]: [02/Feb/2026:11:14:31] ENGINE Client ('192.168.122.100', 51448) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 11:14:32 compute-0 ceph-mon[74676]: [02/Feb/2026:11:14:31] ENGINE Bus STARTED
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614000df0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:32 compute-0 podman[101693]: 2026-02-02 11:14:32.675856478 +0000 UTC m=+0.253790988 container exec_died ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:14:32 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:32 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 02 11:14:32 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 02 11:14:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001ac0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:32 compute-0 podman[101807]: 2026-02-02 11:14:32.978499836 +0000 UTC m=+0.046940679 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:33 compute-0 podman[101807]: 2026-02-02 11:14:33.010952067 +0000 UTC m=+0.079392880 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:33 compute-0 sudo[101074]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:33 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001ac0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Feb 02 11:14:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:14:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:33 compute-0 sudo[101852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:33 compute-0 sudo[101852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:33 compute-0 sudo[101852]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 11.13 scrub starts
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 11.13 scrub ok
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 12.0 scrub starts
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 12.0 scrub ok
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 11.18 scrub starts
Feb 02 11:14:33 compute-0 ceph-mon[74676]: 11.18 scrub ok
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:33 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 11:14:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb 02 11:14:33 compute-0 sudo[101877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:14:33 compute-0 sudo[101877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:33 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:33 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Feb 02 11:14:33 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Feb 02 11:14:34 compute-0 sudo[101877]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[101933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:34 compute-0 sudo[101933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[101933]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[101958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Feb 02 11:14:34 compute-0 sudo[101958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[101958]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4000e00 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111434 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:14:34 compute-0 sudo[102003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:14:34 compute-0 sudo[102003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102003]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[102028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:14:34 compute-0 sudo[102028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102028]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[102053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:14:34 compute-0 sudo[102053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb 02 11:14:34 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 8.11 scrub starts
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 10.6 scrub starts
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 8.11 scrub ok
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 10.6 scrub ok
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 8.1a scrub starts
Feb 02 11:14:34 compute-0 ceph-mon[74676]: 8.1a scrub ok
Feb 02 11:14:34 compute-0 ceph-mon[74676]: pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 11:14:34 compute-0 ceph-mon[74676]: osdmap e71: 3 total, 3 up, 3 in
Feb 02 11:14:34 compute-0 ceph-mon[74676]: mgrmap e31: compute-0.dhyzzj(active, since 4s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:14:34 compute-0 sudo[102078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:34 compute-0 sudo[102078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102078]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[102103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:14:34 compute-0 sudo[102103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102103]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb 02 11:14:34 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb 02 11:14:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001d70 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:34 compute-0 sudo[102152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:14:34 compute-0 sudo[102152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102152]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[102177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new
Feb 02 11:14:34 compute-0 sudo[102177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102177]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 sudo[102202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 11:14:34 compute-0 sudo[102202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:34 compute-0 sudo[102202]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:34 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:35 compute-0 sudo[102227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:14:35 compute-0 sudo[102227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102227]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:14:35 compute-0 sudo[102252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102252]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:14:35 compute-0 sudo[102277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102277]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:35 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec000d00 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:35 compute-0 sudo[102302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:35 compute-0 sudo[102302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102302]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:35 compute-0 sudo[102327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:14:35 compute-0 sudo[102327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102327]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:14:35 compute-0 sudo[102375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102375]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new
Feb 02 11:14:35 compute-0 sudo[102400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102400]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 sudo[102425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:35 compute-0 sudo[102425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102425]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 sudo[102450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 11:14:35 compute-0 sudo[102450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102450]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 02 11:14:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Feb 02 11:14:35 compute-0 sudo[102475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph
Feb 02 11:14:35 compute-0 sudo[102475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102475]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:35.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:35 compute-0 sudo[102501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:14:35 compute-0 sudo[102501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102501]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:35 compute-0 sudo[102526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102526]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:14:35 compute-0 sudo[102551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102551]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb 02 11:14:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 11:14:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb 02 11:14:35 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb 02 11:14:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73) [1] r=0 lpr=73 pi=[55,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73) [1] r=0 lpr=73 pi=[55,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73) [1] r=0 lpr=73 pi=[55,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73) [1] r=0 lpr=73 pi=[55,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 8.2 scrub starts
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 8.2 scrub ok
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 12.1f scrub starts
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 12.1f scrub ok
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 8.1e scrub starts
Feb 02 11:14:35 compute-0 ceph-mon[74676]: 8.1e scrub ok
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: osdmap e72: 3 total, 3 up, 3 in
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.conf
Feb 02 11:14:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Feb 02 11:14:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 11:14:35 compute-0 sudo[102599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:14:35 compute-0 sudo[102599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102599]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.736074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875736117, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 943, "num_deletes": 251, "total_data_size": 2911754, "memory_usage": 3016184, "flush_reason": "Manual Compaction"}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875756813, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 2829845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7167, "largest_seqno": 8108, "table_properties": {"data_size": 2824705, "index_size": 2468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13831, "raw_average_key_size": 21, "raw_value_size": 2813300, "raw_average_value_size": 4395, "num_data_blocks": 107, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030856, "oldest_key_time": 1770030856, "file_creation_time": 1770030875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 20932 microseconds, and 7231 cpu microseconds.
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.756997) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 2829845 bytes OK
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.757050) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.758457) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.758544) EVENT_LOG_v1 {"time_micros": 1770030875758532, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.758585) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2906546, prev total WAL file size 2906546, number of live WAL files 2.
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.760013) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(2763KB)], [20(11MB)]
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875760112, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14770594, "oldest_snapshot_seqno": -1}
Feb 02 11:14:35 compute-0 sudo[102624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new
Feb 02 11:14:35 compute-0 sudo[102624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102624]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 sudo[102649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 sudo[102649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Feb 02 11:14:35 compute-0 sudo[102649]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3211 keys, 13407577 bytes, temperature: kUnknown
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875860034, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13407577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13381993, "index_size": 16450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 83139, "raw_average_key_size": 25, "raw_value_size": 13318667, "raw_average_value_size": 4147, "num_data_blocks": 717, "num_entries": 3211, "num_filter_entries": 3211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770030875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.860495) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13407577 bytes
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.861554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.5 rd, 133.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(10.0) write-amplify(4.7) OK, records in: 3743, records dropped: 532 output_compression: NoCompression
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.861576) EVENT_LOG_v1 {"time_micros": 1770030875861566, "job": 6, "event": "compaction_finished", "compaction_time_micros": 100164, "compaction_time_cpu_micros": 22030, "output_level": 6, "num_output_files": 1, "total_output_size": 13407577, "num_input_records": 3743, "num_output_records": 3211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875862240, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030875863281, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.759895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.863449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.863457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.863459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.863461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:14:35.863462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:14:35 compute-0 sudo[102674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:35 compute-0 sudo[102674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102674]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config
Feb 02 11:14:35 compute-0 sudo[102699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102699]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:35 compute-0 sudo[102724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:14:35 compute-0 sudo[102724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:35 compute-0 sudo[102724]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 sudo[102749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:36 compute-0 sudo[102749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:36 compute-0 sudo[102749]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 sudo[102774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:14:36 compute-0 sudo[102774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:36 compute-0 sudo[102774]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 sudo[102822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:14:36 compute-0 sudo[102822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:36 compute-0 sudo[102822]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 sudo[102847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:36 compute-0 sudo[102847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:36 compute-0 sudo[102847]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 sudo[102872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1d33f80b-d6ca-501c-bac7-184379b89279/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring.new /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 sudo[102872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:36 compute-0 sudo[102872]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180029b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb 02 11:14:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:36 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:14:36 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Feb 02 11:14:36 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Feb 02 11:14:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 8.9 scrub starts
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 8.9 scrub ok
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 10.1d scrub starts
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 10.1d scrub ok
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 8.1d scrub starts
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 8.1d scrub ok
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: pgmap v9: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:36 compute-0 ceph-mon[74676]: osdmap e73: 3 total, 3 up, 3 in
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 11.a scrub starts
Feb 02 11:14:36 compute-0 ceph-mon[74676]: 11.a scrub ok
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-1:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-0:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: Updating compute-2:/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/config/ceph.client.admin.keyring
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:36] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Feb 02 11:14:37 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:36] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:37 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb 02 11:14:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:14:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:14:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:14:37 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:14:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:37 compute-0 sudo[102898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:37 compute-0 sudo[102898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:37 compute-0 sudo[102898]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:37 compute-0 sudo[102923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:14:37 compute-0 sudo[102923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:37 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001d70 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.466388727 +0000 UTC m=+0.036385493 container create 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:14:37 compute-0 systemd[1]: Started libpod-conmon-0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd.scope.
Feb 02 11:14:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.527368619 +0000 UTC m=+0.097365405 container init 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:14:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:37.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.532619166 +0000 UTC m=+0.102615932 container start 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:14:37 compute-0 sad_hamilton[103005]: 167 167
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.536427793 +0000 UTC m=+0.106424589 container attach 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:37 compute-0 systemd[1]: libpod-0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd.scope: Deactivated successfully.
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.538679546 +0000 UTC m=+0.108676322 container died 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.450414298 +0000 UTC m=+0.020411094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba8961450df019f25a1b0bc59c6ce3c5035ba191a1c35525fd07aefcc56b99c-merged.mount: Deactivated successfully.
Feb 02 11:14:37 compute-0 podman[102987]: 2026-02-02 11:14:37.574913334 +0000 UTC m=+0.144910110 container remove 0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:14:37 compute-0 systemd[1]: libpod-conmon-0938f2795a47b51ad355a6f5f2d4e40431cf5a7978d5316d11220de0098da2cd.scope: Deactivated successfully.
Feb 02 11:14:37 compute-0 podman[103029]: 2026-02-02 11:14:37.696385655 +0000 UTC m=+0.044599463 container create 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:37 compute-0 systemd[1]: Started libpod-conmon-1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31.scope.
Feb 02 11:14:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:37 compute-0 podman[103029]: 2026-02-02 11:14:37.67767721 +0000 UTC m=+0.025891028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:37 compute-0 podman[103029]: 2026-02-02 11:14:37.793612725 +0000 UTC m=+0.141826553 container init 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:37 compute-0 podman[103029]: 2026-02-02 11:14:37.800304593 +0000 UTC m=+0.148518401 container start 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:37 compute-0 podman[103029]: 2026-02-02 11:14:37.803488472 +0000 UTC m=+0.151702280 container attach 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:14:37 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 02 11:14:37 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 02 11:14:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 12.1b scrub starts
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 12.1b scrub ok
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 11.1f scrub starts
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 11.1f scrub ok
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 8.a scrub starts
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 8.a scrub ok
Feb 02 11:14:37 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 12.16 scrub starts
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 12.16 scrub ok
Feb 02 11:14:37 compute-0 ceph-mon[74676]: osdmap e74: 3 total, 3 up, 3 in
Feb 02 11:14:37 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:37 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:14:37 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:14:37 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 11.10 scrub starts
Feb 02 11:14:37 compute-0 ceph-mon[74676]: 11.10 scrub ok
Feb 02 11:14:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb 02 11:14:38 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb 02 11:14:38 compute-0 busy_wozniak[103045]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:14:38 compute-0 busy_wozniak[103045]: --> All data devices are unavailable
Feb 02 11:14:38 compute-0 systemd[1]: libpod-1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31.scope: Deactivated successfully.
Feb 02 11:14:38 compute-0 podman[103029]: 2026-02-02 11:14:38.088161596 +0000 UTC m=+0.436375404 container died 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-74595600caa6c0c1655aa7caf5c1c96b6d52d5546579ef70d3fad86785b2258c-merged.mount: Deactivated successfully.
Feb 02 11:14:38 compute-0 podman[103029]: 2026-02-02 11:14:38.12675561 +0000 UTC m=+0.474969418 container remove 1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wozniak, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:38 compute-0 systemd[1]: libpod-conmon-1af286f8b90e89703119eafd230539366887b6e16e31a63a8aae4f8009856a31.scope: Deactivated successfully.
Feb 02 11:14:38 compute-0 sudo[102923]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:38 compute-0 sudo[103076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:38 compute-0 sudo[103076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:38 compute-0 sudo[103076]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:38 compute-0 sshd-session[103070]: Accepted publickey for zuul from 192.168.122.30 port 48254 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:14:38 compute-0 systemd-logind[793]: New session 38 of user zuul.
Feb 02 11:14:38 compute-0 systemd[1]: Started Session 38 of User zuul.
Feb 02 11:14:38 compute-0 sshd-session[103070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:14:38 compute-0 sudo[103102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:14:38 compute-0 sudo[103102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.660916988 +0000 UTC m=+0.037855094 container create 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:14:38 compute-0 systemd[1]: Started libpod-conmon-011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad.scope.
Feb 02 11:14:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.642353817 +0000 UTC m=+0.019291933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.740310158 +0000 UTC m=+0.117248264 container init 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.747327325 +0000 UTC m=+0.124265401 container start 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.750831223 +0000 UTC m=+0.127769309 container attach 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:14:38 compute-0 systemd[1]: libpod-011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad.scope: Deactivated successfully.
Feb 02 11:14:38 compute-0 beautiful_mayer[103251]: 167 167
Feb 02 11:14:38 compute-0 conmon[103251]: conmon 011dc92baec2da53ab8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad.scope/container/memory.events
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.753719834 +0000 UTC m=+0.130657930 container died 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-24e15406722bb2fb75ca3866eed3a6e5fec61c1bc7b23b5c11696b3efa7f303f-merged.mount: Deactivated successfully.
Feb 02 11:14:38 compute-0 podman[103218]: 2026-02-02 11:14:38.788406358 +0000 UTC m=+0.165344444 container remove 011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:14:38 compute-0 systemd[1]: libpod-conmon-011dc92baec2da53ab8f08a4445e0c03c165b79056f8f7b6ac3dea0e72eb49ad.scope: Deactivated successfully.
Feb 02 11:14:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:38 compute-0 podman[103337]: 2026-02-02 11:14:38.913765619 +0000 UTC m=+0.036673181 container create db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:14:38 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Feb 02 11:14:38 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Feb 02 11:14:38 compute-0 systemd[1]: Started libpod-conmon-db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1.scope.
Feb 02 11:14:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b4d016c651ba115121afbb96cd2c2eea1bd9f8a5ce641754d13f08db09dccc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b4d016c651ba115121afbb96cd2c2eea1bd9f8a5ce641754d13f08db09dccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b4d016c651ba115121afbb96cd2c2eea1bd9f8a5ce641754d13f08db09dccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b4d016c651ba115121afbb96cd2c2eea1bd9f8a5ce641754d13f08db09dccc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:38 compute-0 podman[103337]: 2026-02-02 11:14:38.89991785 +0000 UTC m=+0.022825432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:39 compute-0 podman[103337]: 2026-02-02 11:14:39.005436913 +0000 UTC m=+0.128344475 container init db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:14:39 compute-0 podman[103337]: 2026-02-02 11:14:39.010863105 +0000 UTC m=+0.133770667 container start db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:39 compute-0 ceph-mon[74676]: pgmap v12: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 11.16 scrub starts
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 11.16 scrub ok
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 10.7 scrub starts
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 10.7 scrub ok
Feb 02 11:14:39 compute-0 ceph-mon[74676]: osdmap e75: 3 total, 3 up, 3 in
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 8.13 scrub starts
Feb 02 11:14:39 compute-0 ceph-mon[74676]: 8.13 scrub ok
Feb 02 11:14:39 compute-0 podman[103337]: 2026-02-02 11:14:39.014010993 +0000 UTC m=+0.136918555 container attach db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:14:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb 02 11:14:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:39 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.e( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.6( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.6( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:39 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 76 pg[9.e( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:39 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:39 compute-0 python3.9[103368]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:14:39 compute-0 magical_blackburn[103375]: {
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:     "1": [
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:         {
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "devices": [
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "/dev/loop3"
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             ],
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "lv_name": "ceph_lv0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "lv_size": "21470642176",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "name": "ceph_lv0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "tags": {
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.cluster_name": "ceph",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.crush_device_class": "",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.encrypted": "0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.osd_id": "1",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.type": "block",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.vdo": "0",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:                 "ceph.with_tpm": "0"
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             },
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "type": "block",
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:             "vg_name": "ceph_vg0"
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:         }
Feb 02 11:14:39 compute-0 magical_blackburn[103375]:     ]
Feb 02 11:14:39 compute-0 magical_blackburn[103375]: }
Feb 02 11:14:39 compute-0 systemd[1]: libpod-db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1.scope: Deactivated successfully.
Feb 02 11:14:39 compute-0 podman[103337]: 2026-02-02 11:14:39.323427242 +0000 UTC m=+0.446334804 container died db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b4d016c651ba115121afbb96cd2c2eea1bd9f8a5ce641754d13f08db09dccc-merged.mount: Deactivated successfully.
Feb 02 11:14:39 compute-0 podman[103337]: 2026-02-02 11:14:39.360865973 +0000 UTC m=+0.483773525 container remove db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:39 compute-0 systemd[1]: libpod-conmon-db7dd3cb9cfc62584a3aa6a9475d316f3ebb7177c52d98ba77f658701af224b1.scope: Deactivated successfully.
Feb 02 11:14:39 compute-0 sudo[103102]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:39 compute-0 sudo[103406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:14:39 compute-0 sudo[103406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:39 compute-0 sudo[103406]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:39 compute-0 sudo[103439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:14:39 compute-0 sudo[103439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:39.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.840654056 +0000 UTC m=+0.030175608 container create bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:14:39 compute-0 systemd[1]: Started libpod-conmon-bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4.scope.
Feb 02 11:14:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.902524163 +0000 UTC m=+0.092045705 container init bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.909687024 +0000 UTC m=+0.099208546 container start bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.912774971 +0000 UTC m=+0.102296483 container attach bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:39 compute-0 vibrant_booth[103567]: 167 167
Feb 02 11:14:39 compute-0 systemd[1]: libpod-bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4.scope: Deactivated successfully.
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.917041181 +0000 UTC m=+0.106562703 container died bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.827680542 +0000 UTC m=+0.017202094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-caddf4c3be90936bf972ea89ecaadee5a124f7b3a26224dd890d1f9baef11bf7-merged.mount: Deactivated successfully.
Feb 02 11:14:39 compute-0 podman[103550]: 2026-02-02 11:14:39.948876205 +0000 UTC m=+0.138397727 container remove bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_booth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:14:39 compute-0 systemd[1]: libpod-conmon-bc17c8d29e343dd51da602a93f17f13cd020a9d4a2331507e3158871a2e5b3e4.scope: Deactivated successfully.
Feb 02 11:14:39 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Feb 02 11:14:39 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb 02 11:14:40 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 8.5 scrub starts
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 8.5 scrub ok
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 12.14 scrub starts
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 12.14 scrub ok
Feb 02 11:14:40 compute-0 ceph-mon[74676]: osdmap e76: 3 total, 3 up, 3 in
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 11.11 scrub starts
Feb 02 11:14:40 compute-0 ceph-mon[74676]: 11.11 scrub ok
Feb 02 11:14:40 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 77 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:40 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 77 pg[9.e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:40 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 77 pg[9.6( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:40 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 77 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76) [1] r=0 lpr=76 pi=[55,76)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.066446566 +0000 UTC m=+0.043420980 container create 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:40 compute-0 systemd[1]: Started libpod-conmon-920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78.scope.
Feb 02 11:14:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7069e0d1f24f80a48c407b45330ef4638513410ae7adc1efbe7c5a8a6d9f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7069e0d1f24f80a48c407b45330ef4638513410ae7adc1efbe7c5a8a6d9f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7069e0d1f24f80a48c407b45330ef4638513410ae7adc1efbe7c5a8a6d9f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7069e0d1f24f80a48c407b45330ef4638513410ae7adc1efbe7c5a8a6d9f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.140183067 +0000 UTC m=+0.117157511 container init 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.049369117 +0000 UTC m=+0.026343551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.146998188 +0000 UTC m=+0.123972612 container start 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.150038904 +0000 UTC m=+0.127013328 container attach 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:40 compute-0 sudo[103773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdrosvuftpvduuxgoakonhxkzbfyvgys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030880.0242193-51-177516362017383/AnsiballZ_command.py'
Feb 02 11:14:40 compute-0 sudo[103773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:14:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001d70 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:40 compute-0 python3.9[103779]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:14:40 compute-0 lvm[103839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:14:40 compute-0 lvm[103839]: VG ceph_vg0 finished
Feb 02 11:14:40 compute-0 kind_turing[103683]: {}
Feb 02 11:14:40 compute-0 systemd[1]: libpod-920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78.scope: Deactivated successfully.
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.818934047 +0000 UTC m=+0.795908461 container died 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:14:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee7069e0d1f24f80a48c407b45330ef4638513410ae7adc1efbe7c5a8a6d9f49-merged.mount: Deactivated successfully.
Feb 02 11:14:40 compute-0 podman[103635]: 2026-02-02 11:14:40.859839475 +0000 UTC m=+0.836813889 container remove 920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:14:40 compute-0 systemd[1]: libpod-conmon-920219a183f52b5024e38883b6a17a6d3b16b8b2797124e0531a849dac04bd78.scope: Deactivated successfully.
Feb 02 11:14:40 compute-0 sudo[103439]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:40 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 02 11:14:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:40 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 02 11:14:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb 02 11:14:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:40 compute-0 sudo[103857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:14:40 compute-0 sudo[103856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:14:40 compute-0 sudo[103857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:40 compute-0 sudo[103856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:40 compute-0 sudo[103857]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:40 compute-0 sudo[103856]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:41 compute-0 ceph-mon[74676]: pgmap v15: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 8.3 deep-scrub starts
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 8.3 deep-scrub ok
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 12.1 scrub starts
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 12.1 scrub ok
Feb 02 11:14:41 compute-0 ceph-mon[74676]: osdmap e77: 3 total, 3 up, 3 in
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 10.13 scrub starts
Feb 02 11:14:41 compute-0 ceph-mon[74676]: 10.13 scrub ok
Feb 02 11:14:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:14:41 compute-0 sudo[103908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:41 compute-0 sudo[103908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:41 compute-0 sudo[103908]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:41 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180029b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111441 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:14:41 compute-0 sudo[103933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:41 compute-0 sudo[103933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 182 B/s, 10 objects/s recovering
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.465842602 +0000 UTC m=+0.038903463 container create 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:14:41 compute-0 systemd[1]: Started libpod-conmon-5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2.scope.
Feb 02 11:14:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.528964724 +0000 UTC m=+0.102025605 container init 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:14:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.533537533 +0000 UTC m=+0.106598394 container start 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:14:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:41.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.536514026 +0000 UTC m=+0.109574918 container attach 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:41 compute-0 dreamy_haslett[103993]: 167 167
Feb 02 11:14:41 compute-0 systemd[1]: libpod-5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2.scope: Deactivated successfully.
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.537652568 +0000 UTC m=+0.110713429 container died 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.447391134 +0000 UTC m=+0.020452015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a7f3dd977e42215ed1872d79df43eb73dc26192cebc7d46134c7a90a64fb7e8-merged.mount: Deactivated successfully.
Feb 02 11:14:41 compute-0 podman[103976]: 2026-02-02 11:14:41.572178088 +0000 UTC m=+0.145238939 container remove 5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2 (image=quay.io/ceph/ceph:v19, name=dreamy_haslett, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:14:41 compute-0 systemd[1]: libpod-conmon-5916fad4b9bf1715fdb725221577b5823e67c09d219db498fa4e5b21dd124cc2.scope: Deactivated successfully.
Feb 02 11:14:41 compute-0 sudo[103933]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dhyzzj (monmap changed)...
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dhyzzj (monmap changed)...
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:14:41 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:14:41 compute-0 sudo[104010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:41 compute-0 sudo[104010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:41 compute-0 sudo[104010]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:41 compute-0 sudo[104035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:41 compute-0 sudo[104035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:41 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 02 11:14:41 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.010487415 +0000 UTC m=+0.038200024 container create 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:14:42 compute-0 systemd[1]: Started libpod-conmon-66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696.scope.
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb 02 11:14:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.072411194 +0000 UTC m=+0.100123833 container init 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 11.17 scrub starts
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 11.17 scrub ok
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 8.12 scrub starts
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 8.12 scrub ok
Feb 02 11:14:42 compute-0 ceph-mon[74676]: Reconfiguring mon.compute-0 (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 12.12 scrub starts
Feb 02 11:14:42 compute-0 ceph-mon[74676]: 12.12 scrub ok
Feb 02 11:14:42 compute-0 ceph-mon[74676]: pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 182 B/s, 10 objects/s recovering
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mon[74676]: Reconfiguring mgr.compute-0.dhyzzj (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dhyzzj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: Reconfiguring daemon mgr.compute-0.dhyzzj on compute-0
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.078782793 +0000 UTC m=+0.106495402 container start 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.082756524 +0000 UTC m=+0.110469163 container attach 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:42 compute-0 friendly_brown[104093]: 167 167
Feb 02 11:14:42 compute-0 systemd[1]: libpod-66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696.scope: Deactivated successfully.
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.085885852 +0000 UTC m=+0.113598461 container died 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:41.995263377 +0000 UTC m=+0.022976006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb 02 11:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-03d0484b98be20ee72214b10eeeefa69a218c982a14a3696b3f3bdcd6ae9102e-merged.mount: Deactivated successfully.
Feb 02 11:14:42 compute-0 podman[104077]: 2026-02-02 11:14:42.121019149 +0000 UTC m=+0.148731748 container remove 66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696 (image=quay.io/ceph/ceph:v19, name=friendly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:42 compute-0 systemd[1]: libpod-conmon-66df78b7e7e28c5ee45458f946a173e8b8dad7193f89544bd028091ebcc06696.scope: Deactivated successfully.
Feb 02 11:14:42 compute-0 sudo[104035]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Feb 02 11:14:42 compute-0 sudo[104110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:42 compute-0 sudo[104110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:42 compute-0 sudo[104110]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:42 compute-0 sudo[104135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:42 compute-0 sudo[104135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.575360997 +0000 UTC m=+0.035564770 container create d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:42 compute-0 systemd[1]: Started libpod-conmon-d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108.scope.
Feb 02 11:14:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.631345589 +0000 UTC m=+0.091549382 container init d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.635433994 +0000 UTC m=+0.095637767 container start d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:14:42 compute-0 affectionate_bell[104193]: 167 167
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.638480249 +0000 UTC m=+0.098684052 container attach d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:14:42 compute-0 systemd[1]: libpod-d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108.scope: Deactivated successfully.
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.639698903 +0000 UTC m=+0.099902676 container died d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b05585baab61c6d2e7f2ec618ee4c26f3e817764f30c44895b058acbe0180a7-merged.mount: Deactivated successfully.
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.560403747 +0000 UTC m=+0.020607540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:42 compute-0 podman[104176]: 2026-02-02 11:14:42.672108463 +0000 UTC m=+0.132312256 container remove d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:14:42 compute-0 systemd[1]: libpod-conmon-d3bfa0357235c8eb0f0e84b20dbfff088c4e130a4e92903c14b5d0d4623e5108.scope: Deactivated successfully.
Feb 02 11:14:42 compute-0 sudo[104135]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Feb 02 11:14:42 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Feb 02 11:14:42 compute-0 sudo[104209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:42 compute-0 sudo[104209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:42 compute-0 sudo[104209]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:42 compute-0 sudo[104235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:42 compute-0 sudo[104235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:42 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 02 11:14:42 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 8.c scrub starts
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 8.c scrub ok
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 11.1c scrub starts
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 11.1c scrub ok
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 11:14:43 compute-0 ceph-mon[74676]: osdmap e78: 3 total, 3 up, 3 in
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mon[74676]: Reconfiguring crash.compute-0 (monmap changed)...
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:43 compute-0 ceph-mon[74676]: Reconfiguring daemon crash.compute-0 on compute-0
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 12.c scrub starts
Feb 02 11:14:43 compute-0 ceph-mon[74676]: 12.c scrub ok
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mon[74676]: Reconfiguring osd.1 (monmap changed)...
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb 02 11:14:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:43 compute-0 ceph-mon[74676]: Reconfiguring daemon osd.1 on compute-0
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.113107517 +0000 UTC m=+0.041449005 container create e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:14:43 compute-0 systemd[1]: Started libpod-conmon-e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7.scope.
Feb 02 11:14:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:43 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.096414298 +0000 UTC m=+0.024755806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.196588651 +0000 UTC m=+0.124930159 container init e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.201358715 +0000 UTC m=+0.129700193 container start e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:14:43 compute-0 zen_black[104291]: 167 167
Feb 02 11:14:43 compute-0 systemd[1]: libpod-e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7.scope: Deactivated successfully.
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.207207249 +0000 UTC m=+0.135548777 container attach e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.20757927 +0000 UTC m=+0.135920758 container died e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-318856047cb273faf3af447bb6487a4282c66a63d09e21cbe4f770191b7c0af4-merged.mount: Deactivated successfully.
Feb 02 11:14:43 compute-0 podman[104275]: 2026-02-02 11:14:43.24889865 +0000 UTC m=+0.177240138 container remove e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:14:43 compute-0 systemd[1]: libpod-conmon-e4b31526e39e10b2846894fb44c7e546cc11459b7bbb8e8a0e345afdd899aaa7.scope: Deactivated successfully.
Feb 02 11:14:43 compute-0 sudo[104235]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:43 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb 02 11:14:43 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb 02 11:14:43 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb 02 11:14:43 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb 02 11:14:43 compute-0 sudo[104317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:43 compute-0 sudo[104317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:43 compute-0 sudo[104317]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 149 B/s, 8 objects/s recovering
Feb 02 11:14:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 02 11:14:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Feb 02 11:14:43 compute-0 sudo[104342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:43 compute-0 sudo[104342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:43 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:43 compute-0 podman[104415]: 2026-02-02 11:14:43.898252054 +0000 UTC m=+0.045540410 container died c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-23469e64d81a1aaed67c0843508277e963ab529774b210155668103668696b2f-merged.mount: Deactivated successfully.
Feb 02 11:14:43 compute-0 podman[104415]: 2026-02-02 11:14:43.927970349 +0000 UTC m=+0.075258705 container remove c94edc7af472a9d1fb6a8b876b3b1514f7ae9eaa77903793ee94483c46008cc2 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:43 compute-0 bash[104415]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0
Feb 02 11:14:43 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Feb 02 11:14:43 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 02 11:14:43 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 02 11:14:44 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@node-exporter.compute-0.service: Failed with result 'exit-code'.
Feb 02 11:14:44 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:44 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@node-exporter.compute-0.service: Consumed 1.877s CPU time.
Feb 02 11:14:44 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:44 compute-0 podman[104518]: 2026-02-02 11:14:44.308881055 +0000 UTC m=+0.087238401 container create 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 podman[104518]: 2026-02-02 11:14:44.244877318 +0000 UTC m=+0.023234664 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Feb 02 11:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369186a09c5df042ae7af264c5c88586cf1acdb95b3c154dc1231c540bc17c8a/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:44 compute-0 podman[104518]: 2026-02-02 11:14:44.359963689 +0000 UTC m=+0.138321035 container init 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 podman[104518]: 2026-02-02 11:14:44.364386423 +0000 UTC m=+0.142743769 container start 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 bash[104518]: 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.371Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.371Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.372Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.372Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.372Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.372Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=arp
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=bcache
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=bonding
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=btrfs
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=conntrack
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=cpu
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=diskstats
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=dmi
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=edac
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=entropy
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=filefd
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=filesystem
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=hwmon
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=infiniband
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=ipvs
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=loadavg
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=mdadm
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=meminfo
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=netclass
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=netdev
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=netstat
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=nfs
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=nfsd
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=nvme
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=os
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=pressure
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=rapl
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=schedstat
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=selinux
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=sockstat
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=softnet
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=stat
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=tapestats
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=textfile
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=thermal_zone
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=time
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=uname
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=vmstat
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=xfs
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=node_exporter.go:117 level=info collector=zfs
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0[104534]: ts=2026-02-02T11:14:44.373Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Feb 02 11:14:44 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 10.f scrub starts
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 10.f scrub ok
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 11.1d scrub starts
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 11.1d scrub ok
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 10.8 scrub starts
Feb 02 11:14:44 compute-0 ceph-mon[74676]: 10.8 scrub ok
Feb 02 11:14:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:44 compute-0 ceph-mon[74676]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb 02 11:14:44 compute-0 ceph-mon[74676]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb 02 11:14:44 compute-0 ceph-mon[74676]: pgmap v19: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 149 B/s, 8 objects/s recovering
Feb 02 11:14:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Feb 02 11:14:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb 02 11:14:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 11:14:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb 02 11:14:44 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb 02 11:14:44 compute-0 sudo[104342]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:44 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb 02 11:14:44 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb 02 11:14:44 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb 02 11:14:44 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180029b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:44 compute-0 sudo[104545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:44 compute-0 sudo[104545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:44 compute-0 sudo[104545]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:14:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:14:44 compute-0 sudo[104570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:44 compute-0 sudo[104570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.879548079 +0000 UTC m=+0.035503128 volume create fdeb64e44c9754abce603b50ff90315fc38aeeccaa4c16c1d4c805da778ed899
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.887340788 +0000 UTC m=+0.043295837 container create d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 02 11:14:44 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 02 11:14:44 compute-0 systemd[1]: Started libpod-conmon-d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325.scope.
Feb 02 11:14:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8fb55534149dd45d6f140886c28c787241aeb359578bd85819fdefbfe3606f4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.866191544 +0000 UTC m=+0.022146593 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.970867884 +0000 UTC m=+0.126822963 container init d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.977273464 +0000 UTC m=+0.133228513 container start d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 reverent_black[104633]: 65534 65534
Feb 02 11:14:44 compute-0 systemd[1]: libpod-d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325.scope: Deactivated successfully.
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.980956817 +0000 UTC m=+0.136911866 container attach d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:44 compute-0 podman[104617]: 2026-02-02 11:14:44.981172253 +0000 UTC m=+0.137127312 container died d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8fb55534149dd45d6f140886c28c787241aeb359578bd85819fdefbfe3606f4-merged.mount: Deactivated successfully.
Feb 02 11:14:45 compute-0 podman[104617]: 2026-02-02 11:14:45.020799166 +0000 UTC m=+0.176754205 container remove d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325 (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_black, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104617]: 2026-02-02 11:14:45.023377128 +0000 UTC m=+0.179332177 volume remove fdeb64e44c9754abce603b50ff90315fc38aeeccaa4c16c1d4c805da778ed899
Feb 02 11:14:45 compute-0 systemd[1]: libpod-conmon-d2f371c6e51eb52a49a30663a85e26ddedcf8b60602a64cc29e4ef59a67f3325.scope: Deactivated successfully.
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.078028503 +0000 UTC m=+0.037341200 volume create c5e237c48c394c16a26968f2040f8b0c729548152204dd3b8192c974030052f0
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.088681232 +0000 UTC m=+0.047993909 container create 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 systemd[1]: Started libpod-conmon-1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c.scope.
Feb 02 11:14:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e697076fe63220d17e6966fd2265042623ef2fc17aab638ef2af17bfd7ae5002/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.063727621 +0000 UTC m=+0.023040328 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.168221715 +0000 UTC m=+0.127534422 container init 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.172549757 +0000 UTC m=+0.131862444 container start 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 funny_elbakyan[104666]: 65534 65534
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.175950502 +0000 UTC m=+0.135263189 container attach 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.176122517 +0000 UTC m=+0.135435204 container died 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 systemd[1]: libpod-1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c.scope: Deactivated successfully.
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:45 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e697076fe63220d17e6966fd2265042623ef2fc17aab638ef2af17bfd7ae5002-merged.mount: Deactivated successfully.
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.21219341 +0000 UTC m=+0.171506097 container remove 1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c (image=quay.io/prometheus/alertmanager:v0.25.0, name=funny_elbakyan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104650]: 2026-02-02 11:14:45.216502591 +0000 UTC m=+0.175815278 volume remove c5e237c48c394c16a26968f2040f8b0c729548152204dd3b8192c974030052f0
Feb 02 11:14:45 compute-0 systemd[1]: libpod-conmon-1fd25a2d8e18c7662c1fa07a7cfd401e6bb019c8146b578959fe28199b806c4c.scope: Deactivated successfully.
Feb 02 11:14:45 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 12.9 scrub starts
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 12.9 scrub ok
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 8.1b scrub starts
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 8.1b scrub ok
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 12.6 scrub starts
Feb 02 11:14:45 compute-0 ceph-mon[74676]: 12.6 scrub ok
Feb 02 11:14:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 11:14:45 compute-0 ceph-mon[74676]: osdmap e79: 3 total, 3 up, 3 in
Feb 02 11:14:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:45 compute-0 ceph-mon[74676]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb 02 11:14:45 compute-0 ceph-mon[74676]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb 02 11:14:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb 02 11:14:45 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[98728]: ts=2026-02-02T11:14:45.441Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Feb 02 11:14:45 compute-0 podman[104713]: 2026-02-02 11:14:45.451034856 +0000 UTC m=+0.045949151 container died 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 8 objects/s recovering
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 02 11:14:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Feb 02 11:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d53f2c4c49feb3afab235387431b3343aefa4a0bea61a6733342d115b68a27-merged.mount: Deactivated successfully.
Feb 02 11:14:45 compute-0 podman[104713]: 2026-02-02 11:14:45.480835943 +0000 UTC m=+0.075750228 container remove 336d9d28e0eb1f2f2dd97a2b6a4292670e7abd45d078ce92b83d7aa30a81bc97 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104713]: 2026-02-02 11:14:45.486585274 +0000 UTC m=+0.081499559 volume remove 7f59056574df3e3f184de1cd524408228bb23fbe309ce9b32c2d0011f43516fb
Feb 02 11:14:45 compute-0 bash[104713]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:45.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:45 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@alertmanager.compute-0.service: Deactivated successfully.
Feb 02 11:14:45 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:45 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:45 compute-0 podman[104817]: 2026-02-02 11:14:45.819617156 +0000 UTC m=+0.039575512 volume create 4514cf1ef789d44838e2b8b83b13a6036e737a6116b1baf43eb3eaad94b3ad22
Feb 02 11:14:45 compute-0 podman[104817]: 2026-02-02 11:14:45.827344753 +0000 UTC m=+0.047303109 container create ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968ddce76c68b699c89caaaccf77b061082945b33fd89a83023d087b2e8486a9/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968ddce76c68b699c89caaaccf77b061082945b33fd89a83023d087b2e8486a9/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:45 compute-0 podman[104817]: 2026-02-02 11:14:45.885156626 +0000 UTC m=+0.105115002 container init ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 podman[104817]: 2026-02-02 11:14:45.888971823 +0000 UTC m=+0.108930179 container start ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:45 compute-0 bash[104817]: ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f
Feb 02 11:14:45 compute-0 podman[104817]: 2026-02-02 11:14:45.80230873 +0000 UTC m=+0.022267136 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb 02 11:14:45 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.923Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.923Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.932Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.934Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Feb 02 11:14:45 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 02 11:14:45 compute-0 sudo[104570]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:45 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.969Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.969Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.973Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Feb 02 11:14:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:45.973Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Feb 02 11:14:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:45 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Feb 02 11:14:45 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Feb 02 11:14:46 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Feb 02 11:14:46 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Feb 02 11:14:46 compute-0 sudo[104854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:46 compute-0 sudo[104854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:46 compute-0 sudo[104854]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:46 compute-0 sudo[104879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 1d33f80b-d6ca-501c-bac7-184379b89279
Feb 02 11:14:46 compute-0 sudo[104879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb 02 11:14:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 11:14:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 12.17 scrub starts
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 12.17 scrub ok
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 11.1a scrub starts
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 11.1a scrub ok
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 10.2 scrub starts
Feb 02 11:14:46 compute-0 ceph-mon[74676]: 10.2 scrub ok
Feb 02 11:14:46 compute-0 ceph-mon[74676]: osdmap e80: 3 total, 3 up, 3 in
Feb 02 11:14:46 compute-0 ceph-mon[74676]: pgmap v22: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 8 objects/s recovering
Feb 02 11:14:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Feb 02 11:14:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:46 compute-0 ceph-mon[74676]: Reconfiguring grafana.compute-0 (dependencies changed)...
Feb 02 11:14:46 compute-0 ceph-mon[74676]: Reconfiguring daemon grafana.compute-0 on compute-0
Feb 02 11:14:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb 02 11:14:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001920 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.582263301 +0000 UTC m=+0.037055311 container create 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 systemd[1]: Started libpod-conmon-55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3.scope.
Feb 02 11:14:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.565907142 +0000 UTC m=+0.020699182 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.667134595 +0000 UTC m=+0.121926625 container init 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.6748225 +0000 UTC m=+0.129614510 container start 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 serene_thompson[104938]: 472 0
Feb 02 11:14:46 compute-0 systemd[1]: libpod-55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3.scope: Deactivated successfully.
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.680475419 +0000 UTC m=+0.135267479 container attach 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.680986934 +0000 UTC m=+0.135778944 container died 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f850ab13f32ff5b06e9e7ee44928b6d7768f0bd4e381e152764d7de33ef7e7f-merged.mount: Deactivated successfully.
Feb 02 11:14:46 compute-0 podman[104921]: 2026-02-02 11:14:46.718010043 +0000 UTC m=+0.172802053 container remove 55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3 (image=quay.io/ceph/grafana:10.4.0, name=serene_thompson, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 systemd[1]: libpod-conmon-55722b09b5f825d716994d4f41dd56ffe8369d5aef64267c227a29d6ebf86ec3.scope: Deactivated successfully.
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.778388339 +0000 UTC m=+0.042520765 container create 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 systemd[1]: Started libpod-conmon-4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191.scope.
Feb 02 11:14:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.833369903 +0000 UTC m=+0.097502379 container init 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180029b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.840156123 +0000 UTC m=+0.104288539 container start 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 naughty_boyd[104975]: 472 0
Feb 02 11:14:46 compute-0 systemd[1]: libpod-4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191.scope: Deactivated successfully.
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.84432395 +0000 UTC m=+0.108456446 container attach 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.846092 +0000 UTC m=+0.110224426 container died 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.759429596 +0000 UTC m=+0.023562042 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b41ecdf0da31129e724b8acd9e9c42a12574e12d1b2f0aab71e7c9fadbfb7c-merged.mount: Deactivated successfully.
Feb 02 11:14:46 compute-0 podman[104957]: 2026-02-02 11:14:46.879894509 +0000 UTC m=+0.144026935 container remove 4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191 (image=quay.io/ceph/grafana:10.4.0, name=naughty_boyd, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:46 compute-0 systemd[1]: libpod-conmon-4c70bb16f51243117ea20771f867ec8412683cd4a07edb35a3bbc58e1793f191.scope: Deactivated successfully.
Feb 02 11:14:46 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 02 11:14:46 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 02 11:14:46 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:46] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Feb 02 11:14:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:46] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=server t=2026-02-02T11:14:47.128612103Z level=info msg="Shutdown started" reason="System signal: terminated"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=tracing t=2026-02-02T11:14:47.12886169Z level=info msg="Closing tracing"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=ticker t=2026-02-02T11:14:47.128978263Z level=info msg=stopped last_tick=2026-02-02T11:14:40Z
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=grafana-apiserver t=2026-02-02T11:14:47.129336123Z level=info msg="StorageObjectCountTracker pruner is exiting"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[99358]: logger=sqlstore.transactions t=2026-02-02T11:14:47.139782957Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb 02 11:14:47 compute-0 podman[105025]: 2026-02-02 11:14:47.159574132 +0000 UTC m=+0.063367540 container died ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01b75e263b41c63dd4087ca520b0d089d14344fb0d579238854c8021a10f127-merged.mount: Deactivated successfully.
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:47 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180029b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:47 compute-0 podman[105025]: 2026-02-02 11:14:47.194842323 +0000 UTC m=+0.098635731 container remove ff8f27cea151e399f6eadb5452ca669e448a98d1831552766c3153de82cdcaf5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:47 compute-0 bash[105025]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0
Feb 02 11:14:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:14:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:14:47 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@grafana.compute-0.service: Deactivated successfully.
Feb 02 11:14:47 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:47 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@grafana.compute-0.service: Consumed 4.003s CPU time.
Feb 02 11:14:47 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb 02 11:14:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 10.1 deep-scrub starts
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 10.1 deep-scrub ok
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 11.1b scrub starts
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 11.1b scrub ok
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 10.5 deep-scrub starts
Feb 02 11:14:47 compute-0 ceph-mon[74676]: 10.5 deep-scrub ok
Feb 02 11:14:47 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 11:14:47 compute-0 ceph-mon[74676]: osdmap e81: 3 total, 3 up, 3 in
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb 02 11:14:47 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb 02 11:14:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:47 compute-0 podman[105128]: 2026-02-02 11:14:47.55371446 +0000 UTC m=+0.049430219 container create 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559c35f0aecdfab06c1ca59d982997232faeaec0e27b81e40729fe292aa83cab/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559c35f0aecdfab06c1ca59d982997232faeaec0e27b81e40729fe292aa83cab/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559c35f0aecdfab06c1ca59d982997232faeaec0e27b81e40729fe292aa83cab/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559c35f0aecdfab06c1ca59d982997232faeaec0e27b81e40729fe292aa83cab/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559c35f0aecdfab06c1ca59d982997232faeaec0e27b81e40729fe292aa83cab/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:47 compute-0 podman[105128]: 2026-02-02 11:14:47.531128006 +0000 UTC m=+0.026843845 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb 02 11:14:47 compute-0 podman[105128]: 2026-02-02 11:14:47.629268562 +0000 UTC m=+0.124984351 container init 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:47 compute-0 sudo[103773]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:47 compute-0 podman[105128]: 2026-02-02 11:14:47.639659744 +0000 UTC m=+0.135375503 container start 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:47 compute-0 bash[105128]: 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1
Feb 02 11:14:47 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:14:47 compute-0 sudo[104879]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Feb 02 11:14:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 11:14:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:14:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:47 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Feb 02 11:14:47 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826336046Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-02-02T11:14:47Z
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826622304Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826645974Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826651804Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826657155Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826662065Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826664955Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826667945Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826674225Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826677465Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826680215Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826683295Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826687565Z level=info msg=Target target=[all]
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826693716Z level=info msg="Path Home" path=/usr/share/grafana
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826698036Z level=info msg="Path Data" path=/var/lib/grafana
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826701256Z level=info msg="Path Logs" path=/var/log/grafana
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826703886Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826708316Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=settings t=2026-02-02T11:14:47.826714606Z level=info msg="App mode production"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=sqlstore t=2026-02-02T11:14:47.827000194Z level=info msg="Connecting to DB" dbtype=sqlite3
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=sqlstore t=2026-02-02T11:14:47.827018265Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=migrator t=2026-02-02T11:14:47.829765202Z level=info msg="Starting DB migrations"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=migrator t=2026-02-02T11:14:47.846334907Z level=info msg="migrations completed" performed=0 skipped=547 duration=677.329µs
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=sqlstore t=2026-02-02T11:14:47.847312655Z level=info msg="Created default organization"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=secrets t=2026-02-02T11:14:47.847782578Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugin.store t=2026-02-02T11:14:47.868430628Z level=info msg="Loading plugins..."
Feb 02 11:14:47 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=local.finder t=2026-02-02T11:14:47.921827367Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugin.store t=2026-02-02T11:14:47.921870498Z level=info msg="Plugins loaded" count=55 duration=53.44059ms
Feb 02 11:14:47 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=query_data t=2026-02-02T11:14:47.924868392Z level=info msg="Query Service initialization"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=live.push_http t=2026-02-02T11:14:47.929379109Z level=info msg="Live Push Gateway initialization"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.migration t=2026-02-02T11:14:47.932675192Z level=info msg=Starting
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:47.935Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000818224s
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.state.manager t=2026-02-02T11:14:47.945351978Z level=info msg="Running in alternative execution of Error/NoData mode"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=infra.usagestats.collector t=2026-02-02T11:14:47.94758265Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=provisioning.datasources t=2026-02-02T11:14:47.949663439Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=provisioning.alerting t=2026-02-02T11:14:47.97106457Z level=info msg="starting to provision alerting"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=provisioning.alerting t=2026-02-02T11:14:47.971098101Z level=info msg="finished to provision alerting"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.state.manager t=2026-02-02T11:14:47.971834091Z level=info msg="Warming state cache for startup"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.state.manager t=2026-02-02T11:14:47.972598993Z level=info msg="State cache has been initialized" states=0 duration=763.091µs
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.multiorg.alertmanager t=2026-02-02T11:14:47.971861842Z level=info msg="Starting MultiOrg Alertmanager"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ngalert.scheduler t=2026-02-02T11:14:47.972675015Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=ticker t=2026-02-02T11:14:47.972800008Z level=info msg=starting first_tick=2026-02-02T11:14:50Z
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafanaStorageLogger t=2026-02-02T11:14:47.97357719Z level=info msg="Storage starting"
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=http.server t=2026-02-02T11:14:47.975248737Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=http.server t=2026-02-02T11:14:47.975772542Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Feb 02 11:14:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=provisioning.dashboard t=2026-02-02T11:14:47.992302616Z level=info msg="starting to provision dashboards"
Feb 02 11:14:47 compute-0 sshd-session[103123]: Connection closed by 192.168.122.30 port 48254
Feb 02 11:14:48 compute-0 sshd-session[103070]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:14:48 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Feb 02 11:14:48 compute-0 systemd[1]: session-38.scope: Consumed 8.168s CPU time.
Feb 02 11:14:48 compute-0 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Feb 02 11:14:48 compute-0 systemd-logind[793]: Removed session 38.
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=provisioning.dashboard t=2026-02-02T11:14:48.015882788Z level=info msg="finished to provision dashboards"
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana.update.checker t=2026-02-02T11:14:48.035130589Z level=info msg="Update check succeeded" duration=63.317158ms
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugins.update.checker t=2026-02-02T11:14:48.03802673Z level=info msg="Update check succeeded" duration=64.984255ms
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana-apiserver t=2026-02-02T11:14:48.18582397Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana-apiserver t=2026-02-02T11:14:48.186297823Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 12.11 scrub starts
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 12.11 scrub ok
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 11.1e scrub starts
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 11.1e scrub ok
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 10.19 scrub starts
Feb 02 11:14:48 compute-0 ceph-mon[74676]: 10.19 scrub ok
Feb 02 11:14:48 compute-0 ceph-mon[74676]: pgmap v24: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:48 compute-0 ceph-mon[74676]: osdmap e82: 3 total, 3 up, 3 in
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: Reconfiguring crash.compute-1 (monmap changed)...
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: Reconfiguring daemon crash.compute-1 on compute-1
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec0030a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Feb 02 11:14:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Feb 02 11:14:48 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Feb 02 11:14:48 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb 02 11:14:48 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb 02 11:14:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:49 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec0030a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Feb 02 11:14:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Feb 02 11:14:49 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Feb 02 11:14:49 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Feb 02 11:14:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 12.13 scrub starts
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 12.13 scrub ok
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 8.4 scrub starts
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 8.4 scrub ok
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 10.18 scrub starts
Feb 02 11:14:49 compute-0 ceph-mon[74676]: 10.18 scrub ok
Feb 02 11:14:49 compute-0 ceph-mon[74676]: Reconfiguring osd.0 (monmap changed)...
Feb 02 11:14:49 compute-0 ceph-mon[74676]: Reconfiguring daemon osd.0 on compute-1
Feb 02 11:14:49 compute-0 ceph-mon[74676]: osdmap e83: 3 total, 3 up, 3 in
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mon[74676]: Reconfiguring mon.compute-1 (monmap changed)...
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:49 compute-0 ceph-mon[74676]: Reconfiguring daemon mon.compute-1 on compute-1
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb 02 11:14:49 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb 02 11:14:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:49 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 02 11:14:49 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Feb 02 11:14:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Feb 02 11:14:50 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Feb 02 11:14:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:14:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009630 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 8.8 scrub starts
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 12.4 scrub starts
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 8.8 scrub ok
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 12.4 scrub ok
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 12.1c scrub starts
Feb 02 11:14:50 compute-0 ceph-mon[74676]: 12.1c scrub ok
Feb 02 11:14:50 compute-0 ceph-mon[74676]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Feb 02 11:14:50 compute-0 ceph-mon[74676]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Feb 02 11:14:50 compute-0 ceph-mon[74676]: pgmap v27: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:50 compute-0 ceph-mon[74676]: osdmap e84: 3 total, 3 up, 3 in
Feb 02 11:14:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb 02 11:14:50 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb 02 11:14:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:50 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 02 11:14:50 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 02 11:14:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:51 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 5 objects/s recovering
Feb 02 11:14:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.zebspe (monmap changed)...
Feb 02 11:14:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.zebspe (monmap changed)...
Feb 02 11:14:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 11:14:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:14:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:14:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:14:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:51 compute-0 ceph-mgr[74969]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.zebspe on compute-2
Feb 02 11:14:51 compute-0 ceph-mgr[74969]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.zebspe on compute-2
Feb 02 11:14:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 10.4 scrub starts
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 11.7 scrub starts
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 10.4 scrub ok
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 11.7 scrub ok
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 12.19 scrub starts
Feb 02 11:14:51 compute-0 ceph-mon[74676]: 12.19 scrub ok
Feb 02 11:14:51 compute-0 ceph-mon[74676]: Reconfiguring mon.compute-2 (monmap changed)...
Feb 02 11:14:51 compute-0 ceph-mon[74676]: Reconfiguring daemon mon.compute-2 on compute-2
Feb 02 11:14:51 compute-0 ceph-mon[74676]: osdmap e85: 3 total, 3 up, 3 in
Feb 02 11:14:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zebspe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb 02 11:14:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:14:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:51 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 02 11:14:51 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Feb 02 11:14:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO root] Restarting engine...
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE Bus STOPPING
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE Bus STOPPING
Feb 02 11:14:52 compute-0 sudo[105193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:52 compute-0 sudo[105193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:52 compute-0 sudo[105193]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:52 compute-0 sudo[105218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:14:52 compute-0 sudo[105218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE Bus STOPPED
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE Bus STOPPED
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE Bus STARTING
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE Bus STARTING
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec0030a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE Serving on http://:::9283
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: [02/Feb/2026:11:14:52] ENGINE Bus STARTED
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE Serving on http://:::9283
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.error] [02/Feb/2026:11:14:52] ENGINE Bus STARTED
Feb 02 11:14:52 compute-0 ceph-mgr[74969]: [prometheus INFO root] Engine started.
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 11.f scrub starts
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 11.f scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.1e scrub starts
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.1e scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.8 scrub starts
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.8 scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: pgmap v30: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 5 objects/s recovering
Feb 02 11:14:52 compute-0 ceph-mon[74676]: Reconfiguring mgr.compute-2.zebspe (monmap changed)...
Feb 02 11:14:52 compute-0 ceph-mon[74676]: Reconfiguring daemon mgr.compute-2.zebspe on compute-2
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 11.5 scrub starts
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 11.5 scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.3 scrub starts
Feb 02 11:14:52 compute-0 ceph-mon[74676]: 12.3 scrub ok
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb 02 11:14:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:52 compute-0 podman[105330]: 2026-02-02 11:14:52.896558087 +0000 UTC m=+0.062708327 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:14:52 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 02 11:14:52 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 02 11:14:52 compute-0 podman[105330]: 2026-02-02 11:14:52.99315911 +0000 UTC m=+0.159309330 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:53 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:14:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:14:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:53 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:14:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:53 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:14:53 compute-0 podman[105467]: 2026-02-02 11:14:53.460583858 +0000 UTC m=+0.053727451 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 3 objects/s recovering
Feb 02 11:14:53 compute-0 podman[105467]: 2026-02-02 11:14:53.470118069 +0000 UTC m=+0.063261642 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:14:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:14:53 compute-0 podman[105542]: 2026-02-02 11:14:53.68910164 +0000 UTC m=+0.053339070 container exec 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:14:53 compute-0 podman[105542]: 2026-02-02 11:14:53.695040663 +0000 UTC m=+0.059278073 container exec_died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 12.10 scrub starts
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 12.10 scrub ok
Feb 02 11:14:53 compute-0 ceph-mon[74676]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb 02 11:14:53 compute-0 ceph-mon[74676]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb 02 11:14:53 compute-0 ceph-mon[74676]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 8.d scrub starts
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 8.d scrub ok
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 11.12 scrub starts
Feb 02 11:14:53 compute-0 ceph-mon[74676]: 11.12 scrub ok
Feb 02 11:14:53 compute-0 podman[105605]: 2026-02-02 11:14:53.873452404 +0000 UTC m=+0.050897614 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:14:53 compute-0 podman[105605]: 2026-02-02 11:14:53.882215824 +0000 UTC m=+0.059661014 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:14:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:14:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:53 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb 02 11:14:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:14:53 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb 02 11:14:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:54 compute-0 podman[105673]: 2026-02-02 11:14:54.060433579 +0000 UTC m=+0.042980097 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64)
Feb 02 11:14:54 compute-0 podman[105673]: 2026-02-02 11:14:54.073231439 +0000 UTC m=+0.055777967 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, description=keepalived for Ceph, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-type=git, architecture=x86_64)
Feb 02 11:14:54 compute-0 podman[105738]: 2026-02-02 11:14:54.262481887 +0000 UTC m=+0.047587993 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:54 compute-0 podman[105738]: 2026-02-02 11:14:54.286153515 +0000 UTC m=+0.071259591 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:54 compute-0 podman[105813]: 2026-02-02 11:14:54.455009394 +0000 UTC m=+0.041771603 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:54 compute-0 podman[105813]: 2026-02-02 11:14:54.616268226 +0000 UTC m=+0.203030455 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:14:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec003db0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 10.15 scrub starts
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 10.15 scrub ok
Feb 02 11:14:54 compute-0 ceph-mon[74676]: pgmap v31: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 3 objects/s recovering
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 12.1d scrub starts
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 12.1d scrub ok
Feb 02 11:14:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 8.14 scrub starts
Feb 02 11:14:54 compute-0 ceph-mon[74676]: 8.14 scrub ok
Feb 02 11:14:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:54 compute-0 podman[105922]: 2026-02-02 11:14:54.959309842 +0000 UTC m=+0.054952865 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:54 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 02 11:14:54 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 02 11:14:54 compute-0 podman[105922]: 2026-02-02 11:14:54.990281199 +0000 UTC m=+0.085924222 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:14:55 compute-0 sudo[105218]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:14:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:55 compute-0 sudo[105966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:55 compute-0 sudo[105966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:55 compute-0 sudo[105966]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:55 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:55 compute-0 sudo[105991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:14:55 compute-0 sudo[105991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 3 objects/s recovering
Feb 02 11:14:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:14:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.55761161 +0000 UTC m=+0.047688155 container create d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:14:55 compute-0 systemd[1]: Started libpod-conmon-d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b.scope.
Feb 02 11:14:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.533206493 +0000 UTC m=+0.023283118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.634228197 +0000 UTC m=+0.124304752 container init d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.641260779 +0000 UTC m=+0.131337334 container start d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.64496043 +0000 UTC m=+0.135037035 container attach d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:14:55 compute-0 intelligent_franklin[106074]: 167 167
Feb 02 11:14:55 compute-0 systemd[1]: libpod-d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b.scope: Deactivated successfully.
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.661083761 +0000 UTC m=+0.151160326 container died d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac71f3e9734df726c3b9ec12faf14646f7a5a2ab72e6eca661a022700becce7-merged.mount: Deactivated successfully.
Feb 02 11:14:55 compute-0 podman[106058]: 2026-02-02 11:14:55.69576839 +0000 UTC m=+0.185844935 container remove d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:55 compute-0 systemd[1]: libpod-conmon-d51df6c00507ba176a0c4da38a7be080294ab8cde8b56d54027bb924ed605d3b.scope: Deactivated successfully.
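
The whole intelligent_franklin lifecycle above (create, init, start, attach, died, remove inside roughly 150 ms) is a throwaway container whose only output is "167 167": cephadm probing which uid/gid the ceph user maps to inside the image, so files it writes on the host get matching ownership (167:167 in upstream Ceph images). The exact probe command is an assumption; something equivalent to:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # read the owner of a path the image ships as the "ceph" user; the
    # path/command here is an assumption, the "167 167" matches the log
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"], text=True)
    uid, gid = out.split()
    print(f"ceph uid:gid inside the image = {uid}:{gid}")
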
Feb 02 11:14:55 compute-0 podman[106099]: 2026-02-02 11:14:55.818359744 +0000 UTC m=+0.040733185 container create 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:14:55 compute-0 systemd[1]: Started libpod-conmon-00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289.scope.
Feb 02 11:14:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
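
These kernel lines are the host bind-mounting its XFS-backed paths (rootfs, ceph.conf, the log and crash directories, the bootstrap keyring) into the new container's mount namespace; the "timestamps until 2038" warning just means the filesystems were created without the XFS bigtime feature. A quick check, assuming /var/lib/containers is the filesystem in question:

    import re
    import subprocess

    # xfs_info prints "bigtime=1" in its meta-data section on filesystems
    # that support post-2038 timestamps; older xfsprogs/filesystems omit it
    info = subprocess.check_output(["xfs_info", "/var/lib/containers"], text=True)
    m = re.search(r"bigtime=(\d)", info)
    print("bigtime:", "enabled" if m and m.group(1) == "1" else "absent (y2038 limit)")
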
Feb 02 11:14:55 compute-0 podman[106099]: 2026-02-02 11:14:55.801113672 +0000 UTC m=+0.023487133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:55 compute-0 podman[106099]: 2026-02-02 11:14:55.905419166 +0000 UTC m=+0.127792687 container init 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:55 compute-0 podman[106099]: 2026-02-02 11:14:55.913317212 +0000 UTC m=+0.135690703 container start 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:55 compute-0 podman[106099]: 2026-02-02 11:14:55.917175528 +0000 UTC m=+0.139549009 container attach 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:14:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:14:55.938Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003173836s
Feb 02 11:14:55 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 10.14 scrub starts
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 10.14 scrub ok
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 12.2 scrub starts
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 12.2 scrub ok
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 11.14 scrub starts
Feb 02 11:14:55 compute-0 ceph-mon[74676]: 11.14 scrub ok
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:14:55 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 02 11:14:56 compute-0 wizardly_moser[106115]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:14:56 compute-0 wizardly_moser[106115]: --> All data devices are unavailable
Feb 02 11:14:56 compute-0 systemd[1]: libpod-00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289.scope: Deactivated successfully.
Feb 02 11:14:56 compute-0 podman[106099]: 2026-02-02 11:14:56.249775656 +0000 UTC m=+0.472149167 container died 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f01031b87f899cdb08b75cc59256798779612e3b8ad19d83d6c2fbfc91237972-merged.mount: Deactivated successfully.
Feb 02 11:14:56 compute-0 podman[106099]: 2026-02-02 11:14:56.294252413 +0000 UTC m=+0.516625854 container remove 00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_moser, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:14:56 compute-0 systemd[1]: libpod-conmon-00511aecb1359df83c6819f6c1d16fb1e10446404bfe059bf583f2a771ec5289.scope: Deactivated successfully.
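
wizardly_moser was the actual "lvm batch" run, and its report is the interesting part: "0 physical, 1 LVM" data devices passed in, yet "All data devices are unavailable". Unavailable here means the LV is already consumed: it carries OSD lv_tags (see the lvm list JSON further down, where /dev/ceph_vg0/ceph_lv0 is tagged ceph.osd_id=1), so the batch exits without creating anything. A small sketch of the same pre-check:

    import json
    import subprocess

    # an LV whose tags already include ceph.osd_id has been prepared as an
    # OSD; "lvm batch" will refuse to reuse it (hence "unavailable")
    out = subprocess.check_output(["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, volumes in json.loads(out).items():
        for vol in volumes:
            print(f"{vol['lv_path']} already belongs to osd.{osd_id} "
                  f"(osd_fsid {vol['tags']['ceph.osd_fsid']})")
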
Feb 02 11:14:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:14:56 compute-0 sudo[105991]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:56 compute-0 sudo[106143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:56 compute-0 sudo[106143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:56 compute-0 sudo[106143]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:56 compute-0 sudo[106168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:14:56 compute-0 sudo[106168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.829914578 +0000 UTC m=+0.043925743 container create ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:14:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:56 compute-0 systemd[1]: Started libpod-conmon-ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5.scope.
Feb 02 11:14:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.809991303 +0000 UTC m=+0.024002488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.908765605 +0000 UTC m=+0.122776790 container init ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.915450868 +0000 UTC m=+0.129462023 container start ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:56 compute-0 jolly_dewdney[106252]: 167 167
Feb 02 11:14:56 compute-0 systemd[1]: libpod-ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5.scope: Deactivated successfully.
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.920299761 +0000 UTC m=+0.134310946 container attach ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.920933378 +0000 UTC m=+0.134944553 container died ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66b8ceaf4d7d8f46a11fbaadd3483d7f8996a442254390ee28011e7a182c482-merged.mount: Deactivated successfully.
Feb 02 11:14:56 compute-0 podman[106234]: 2026-02-02 11:14:56.95717272 +0000 UTC m=+0.171183885 container remove ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:56 compute-0 systemd[1]: libpod-conmon-ca89e7ea3b519498700d90aaa8fb8679da12793602064ae5d85a1513656291b5.scope: Deactivated successfully.
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 12.b scrub starts
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 12.b scrub ok
Feb 02 11:14:56 compute-0 ceph-mon[74676]: pgmap v32: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 3 objects/s recovering
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 10.11 scrub starts
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 10.11 scrub ok
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 11.1 scrub starts
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 11.1 scrub ok
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 12.a scrub starts
Feb 02 11:14:56 compute-0 ceph-mon[74676]: 12.a scrub ok
Feb 02 11:14:56 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 02 11:14:56 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 11.4 scrub ok
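
The interleaved "N.M scrub starts" / "scrub ok" pairs are routine per-PG scrubs; the OSD reports them through log_channel(cluster) and the mon echoes them into the cluster log, which is why most appear twice. To see when each PG was last scrubbed (the JSON layout of "pg dump" shifts a little between releases, so treat the key names as assumptions):

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "pg", "dump", "pgs", "--format", "json"])
    dump = json.loads(out)
    pg_stats = dump.get("pg_stats", dump)  # some releases return the list directly
    for pg in pg_stats:
        print(pg["pgid"], pg["state"], "last scrub", pg["last_scrub_stamp"])
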
Feb 02 11:14:57 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:57] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Feb 02 11:14:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:14:57] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.118453922 +0000 UTC m=+0.052194189 container create 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:14:57 compute-0 systemd[1]: Started libpod-conmon-1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482.scope.
Feb 02 11:14:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34735a2e4d3b6b022114fe62979badc3985f2c97f70061effdb1bafb30a0743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:57 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec003db0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34735a2e4d3b6b022114fe62979badc3985f2c97f70061effdb1bafb30a0743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34735a2e4d3b6b022114fe62979badc3985f2c97f70061effdb1bafb30a0743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34735a2e4d3b6b022114fe62979badc3985f2c97f70061effdb1bafb30a0743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.101075787 +0000 UTC m=+0.034816104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:14:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.235314469 +0000 UTC m=+0.169054756 container init 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.243181465 +0000 UTC m=+0.176921742 container start 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.273501144 +0000 UTC m=+0.207241431 container attach 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:14:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Feb 02 11:14:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 02 11:14:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Feb 02 11:14:57 compute-0 infallible_tu[106291]: {
Feb 02 11:14:57 compute-0 infallible_tu[106291]:     "1": [
Feb 02 11:14:57 compute-0 infallible_tu[106291]:         {
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "devices": [
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "/dev/loop3"
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             ],
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "lv_name": "ceph_lv0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "lv_size": "21470642176",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "name": "ceph_lv0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "tags": {
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.cluster_name": "ceph",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.crush_device_class": "",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.encrypted": "0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.osd_id": "1",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.type": "block",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.vdo": "0",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:                 "ceph.with_tpm": "0"
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             },
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "type": "block",
Feb 02 11:14:57 compute-0 infallible_tu[106291]:             "vg_name": "ceph_vg0"
Feb 02 11:14:57 compute-0 infallible_tu[106291]:         }
Feb 02 11:14:57 compute-0 infallible_tu[106291]:     ]
Feb 02 11:14:57 compute-0 infallible_tu[106291]: }
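
This is the full "ceph-volume lvm list --format json" inventory: a map of OSD id to its logical volumes, with the flattened lv_tags string also broken out as a "tags" object and "devices" resolving the LV down to the physical disk (a loopback device, /dev/loop3, on this lab node). Walking it gives a quick osd-to-device map:

    import json
    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    out = subprocess.check_output(["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, volumes in json.loads(out).items():
        for vol in volumes:
            tags = vol["tags"]
            # sanity check: the volume belongs to this cluster
            assert tags["ceph.cluster_fsid"] == FSID
            print(f"osd.{osd_id} ({tags['ceph.type']}): {vol['lv_path']} "
                  f"on {', '.join(vol['devices'])}")
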
Feb 02 11:14:57 compute-0 systemd[1]: libpod-1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482.scope: Deactivated successfully.
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.525875879 +0000 UTC m=+0.459616146 container died 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:14:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:57.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d34735a2e4d3b6b022114fe62979badc3985f2c97f70061effdb1bafb30a0743-merged.mount: Deactivated successfully.
Feb 02 11:14:57 compute-0 podman[106275]: 2026-02-02 11:14:57.572647908 +0000 UTC m=+0.506388195 container remove 1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 02 11:14:57 compute-0 systemd[1]: libpod-conmon-1500462a0c698b74099d82fbbdcb361cfbe693fb5e66de7b557ccc467a4fc482.scope: Deactivated successfully.
Feb 02 11:14:57 compute-0 sudo[106168]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:57 compute-0 sudo[106315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:14:57 compute-0 sudo[106315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:57 compute-0 sudo[106315]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:57 compute-0 sudo[106340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:14:57 compute-0 sudo[106340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:57 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 02 11:14:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 8.f scrub starts
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 8.f scrub ok
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 11.4 scrub starts
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 11.4 scrub ok
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 12.e scrub starts
Feb 02 11:14:57 compute-0 ceph-mon[74676]: 12.e scrub ok
Feb 02 11:14:57 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Feb 02 11:14:57 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 02 11:14:57 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 11:14:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb 02 11:14:57 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb 02 11:14:57 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 86 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=86) [1] r=0 lpr=86 pi=[55,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:14:57 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 86 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=86) [1] r=0 lpr=86 pi=[55,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
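
The "osd pool set default.rgw.log pgp_num_actual 11" above is the mgr walking the pool's placement count toward its pg_num one step at a time; each step publishes a new osdmap epoch (e86 here, e87 a moment later) and the affected PGs re-peer, which is what these osd.1 state<Start> transitions record. Watching the epoch advance is a one-liner per tick (field names as emitted by "ceph osd stat"; they may vary slightly by release):

    import json
    import subprocess
    import time

    last = None
    for _ in range(10):
        stat = json.loads(subprocess.check_output(
            ["ceph", "osd", "stat", "--format", "json"]))
        if stat["epoch"] != last:
            last = stat["epoch"]
            print(f"osdmap e{last}: {stat['num_up_osds']}/{stat['num_osds']} up")
        time.sleep(1)
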
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.09208858 +0000 UTC m=+0.036405067 container create 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:14:58 compute-0 systemd[1]: Started libpod-conmon-05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc.scope.
Feb 02 11:14:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.16155563 +0000 UTC m=+0.105872117 container init 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.167116542 +0000 UTC m=+0.111433029 container start 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.170351281 +0000 UTC m=+0.114667818 container attach 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:14:58 compute-0 sleepy_stonebraker[106424]: 167 167
Feb 02 11:14:58 compute-0 systemd[1]: libpod-05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc.scope: Deactivated successfully.
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.173128107 +0000 UTC m=+0.117444594 container died 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.077878631 +0000 UTC m=+0.022195138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b037d9915a88dab0d52a44933fe913047a497e456d8b7ca282fbad0cf7d192-merged.mount: Deactivated successfully.
Feb 02 11:14:58 compute-0 podman[106407]: 2026-02-02 11:14:58.215560048 +0000 UTC m=+0.159876535 container remove 05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:14:58 compute-0 systemd[1]: libpod-conmon-05b426d7fceee965c938c818d7bd78a016405b2ae1bea94256b09bec31880efc.scope: Deactivated successfully.
Feb 02 11:14:58 compute-0 podman[106448]: 2026-02-02 11:14:58.339872259 +0000 UTC m=+0.040935431 container create 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:58 compute-0 systemd[1]: Started libpod-conmon-43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19.scope.
Feb 02 11:14:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e328d9cecc785260a869776e2f4ae6336376733bd9925ecfed8c2d9aa48c7c4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e328d9cecc785260a869776e2f4ae6336376733bd9925ecfed8c2d9aa48c7c4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e328d9cecc785260a869776e2f4ae6336376733bd9925ecfed8c2d9aa48c7c4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e328d9cecc785260a869776e2f4ae6336376733bd9925ecfed8c2d9aa48c7c4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:14:58 compute-0 podman[106448]: 2026-02-02 11:14:58.322203325 +0000 UTC m=+0.023266537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:14:58 compute-0 podman[106448]: 2026-02-02 11:14:58.427047524 +0000 UTC m=+0.128110726 container init 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:14:58 compute-0 podman[106448]: 2026-02-02 11:14:58.434194379 +0000 UTC m=+0.135257561 container start 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:14:58 compute-0 podman[106448]: 2026-02-02 11:14:58.438155288 +0000 UTC m=+0.139218500 container attach 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:58 compute-0 sshd-session[106406]: Received disconnect from 91.224.92.108 port 51602:11:  [preauth]
Feb 02 11:14:58 compute-0 sshd-session[106406]: Disconnected from authenticating user root 91.224.92.108 port 51602 [preauth]
Feb 02 11:14:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:58 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 02 11:14:58 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 02 11:14:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb 02 11:14:58 compute-0 ceph-mon[74676]: pgmap v33: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 11.e scrub starts
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 11.e scrub ok
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 8.19 scrub starts
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 8.19 scrub ok
Feb 02 11:14:58 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 11:14:58 compute-0 ceph-mon[74676]: osdmap e86: 3 total, 3 up, 3 in
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 9.2 scrub starts
Feb 02 11:14:58 compute-0 ceph-mon[74676]: 9.2 scrub ok
Feb 02 11:14:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb 02 11:14:59 compute-0 lvm[106540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:14:59 compute-0 lvm[106540]: VG ceph_vg0 finished
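
The lvm[106540] pair is udev-driven event activation confirming that every PV of ceph_vg0 is online (its single PV being /dev/loop3). The same VG and its ceph tags can be inspected with lvm2's JSON reporting:

    import json
    import subprocess

    out = subprocess.check_output(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size,devices,lv_tags", "ceph_vg0"])
    for lv in json.loads(out)["report"][0]["lv"]:
        print(f"{lv['vg_name']}/{lv['lv_name']} {lv['lv_size']} "
              f"on {lv['devices']}: {lv['lv_tags'][:48]}...")
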
Feb 02 11:14:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb 02 11:14:59 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 87 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[55,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:59 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 87 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[55,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:14:59 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 87 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[55,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:14:59 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 87 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[55,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
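
The two osd.1 PeeringState lines above record pgs 9.a and 9.1a having their acting set moved from [1] to [0] while the up set stays [1] (role 0 -> -1), so the local copy transitions to Stray; the epoch-89 entries a second later show the acting set returning to [1] and the pgs going Primary again, which is the expected churn from the pgp_num_actual steps. A minimal sketch, assuming journal text on stdin, that extracts just the pgid and the up/acting transition from such lines (the regex mirrors this log's layout and is not derived from Ceph source):

```python
#!/usr/bin/env python3
# Minimal sketch: pull pgid and acting-set changes out of OSD
# "PeeringState::start_peering_interval" lines like those above.
import re
import sys

PEER = re.compile(
    r'pg\[(?P<pgid>[0-9a-f.]+)\(.*start_peering_interval '
    r'up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], '
    r'acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]'
)

for line in sys.stdin:
    m = PEER.search(line)
    if m:
        print(f"pg {m.group('pgid')}: acting [{m.group('act_old')}] -> "
              f"[{m.group('act_new')}] "
              f"(up [{m.group('up_old')}] -> [{m.group('up_new')}])")
```
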
Feb 02 11:14:59 compute-0 blissful_benz[106465]: {}
Feb 02 11:14:59 compute-0 systemd[1]: libpod-43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19.scope: Deactivated successfully.
Feb 02 11:14:59 compute-0 podman[106448]: 2026-02-02 11:14:59.074356814 +0000 UTC m=+0.775420016 container died 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e328d9cecc785260a869776e2f4ae6336376733bd9925ecfed8c2d9aa48c7c4d-merged.mount: Deactivated successfully.
Feb 02 11:14:59 compute-0 podman[106448]: 2026-02-02 11:14:59.115406226 +0000 UTC m=+0.816469408 container remove 43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_benz, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:14:59 compute-0 systemd[1]: libpod-conmon-43b568a2917bed935f83dfe22095582be4f21d73d18a56f2762734dbbfc08e19.scope: Deactivated successfully.
Feb 02 11:14:59 compute-0 sudo[106340]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:14:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:14:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:14:59 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:14:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:14:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:14:59.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
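
The radosgw triplets above (starting new request / req done / beast access line) are anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102 arriving on a roughly two-second cadence, which is consistent with load-balancer health checks rather than user traffic. A minimal sketch, assuming journal text on stdin, that groups the beast access lines by client and HTTP status and reports the worst observed latency (field layout taken from this log):

```python
#!/usr/bin/env python3
# Minimal sketch: summarize radosgw "beast" access lines like the ones above.
import re
import sys
from collections import Counter

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\d+\.\d+\.\d+\.\d+) .* HTTP/1\.\d" '
    r'(?P<status>\d{3}) .* latency=(?P<lat>[\d.]+)s'
)

hits = Counter()
worst = 0.0
for line in sys.stdin:
    m = BEAST.search(line)
    if m:
        hits[(m.group("ip"), m.group("status"))] += 1
        worst = max(worst, float(m.group("lat")))

for (ip, status), n in sorted(hits.items()):
    print(f"{ip} -> {status}: {n} requests")
print(f"max observed latency: {worst:.9f}s")
```
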
Feb 02 11:14:59 compute-0 sudo[106556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:14:59 compute-0 sudo[106556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:14:59 compute-0 sudo[106556]: pam_unix(sudo:session): session closed for user root
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:14:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 02 11:14:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Feb 02 11:14:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:14:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:14:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:14:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:14:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:14:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:14:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:14:59 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 02 11:14:59 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 02 11:15:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 12.18 scrub starts
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 12.18 scrub ok
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 8.18 scrub starts
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 8.18 scrub ok
Feb 02 11:15:00 compute-0 ceph-mon[74676]: osdmap e87: 3 total, 3 up, 3 in
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 9.c scrub starts
Feb 02 11:15:00 compute-0 ceph-mon[74676]: 9.c scrub ok
Feb 02 11:15:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:15:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:15:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Feb 02 11:15:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:00 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 11:15:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb 02 11:15:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb 02 11:15:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec003db0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb 02 11:15:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb 02 11:15:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb 02 11:15:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 89 pg[9.a( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 89 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 89 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 89 pg[9.a( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=6 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:00 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 02 11:15:00 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 02 11:15:01 compute-0 ceph-mon[74676]: pgmap v36: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.9 scrub starts
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.9 scrub ok
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.e scrub starts
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.e scrub ok
Feb 02 11:15:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 11:15:01 compute-0 ceph-mon[74676]: osdmap e88: 3 total, 3 up, 3 in
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.1 scrub starts
Feb 02 11:15:01 compute-0 ceph-mon[74676]: 9.1 scrub ok
Feb 02 11:15:01 compute-0 ceph-mon[74676]: osdmap e89: 3 total, 3 up, 3 in
Feb 02 11:15:01 compute-0 sudo[106583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:15:01 compute-0 sudo[106583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:15:01 compute-0 sudo[106583]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:01 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 02 11:15:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Feb 02 11:15:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb 02 11:15:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 11:15:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb 02 11:15:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:01.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:01 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb 02 11:15:01 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 90 pg[9.a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=6 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:01 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 90 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=5 ec=55/37 lis/c=87/55 les/c/f=88/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:01 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 02 11:15:01 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.d deep-scrub starts
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.d deep-scrub ok
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.6 scrub starts
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.6 scrub ok
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.0 scrub starts
Feb 02 11:15:02 compute-0 ceph-mon[74676]: 9.0 scrub ok
Feb 02 11:15:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Feb 02 11:15:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 11:15:02 compute-0 ceph-mon[74676]: osdmap e90: 3 total, 3 up, 3 in
Feb 02 11:15:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec003db0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:02 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 02 11:15:02 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 02 11:15:03 compute-0 ceph-mon[74676]: pgmap v39: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
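
Each pgmap line is a one-shot cluster summary: total pgs, a state breakdown, then data/used/avail figures. While a pgp_num step's remap is in flight the breakdown briefly reads 2 active+remapped before settling back to 353 active+clean. A minimal sketch parsing that breakdown into counts, using one of the lines above verbatim:

```python
#!/usr/bin/env python3
# Minimal sketch: split a "pgmap vN: ..." summary, as logged above, into
# per-state pg counts. Assumes the exact layout shown in this journal.
import re

LINE = ("pgmap v39: 353 pgs: 2 active+remapped, 351 active+clean; "
        "457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail")

m = re.search(r'pgmap v(\d+): (\d+) pgs: (?P<states>[^;]+);', LINE)
states = {
    name: int(count)
    for count, name in (pair.split() for pair in m.group("states").split(", "))
}
print(states)  # {'active+remapped': 2, 'active+clean': 351}
```
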
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.13 scrub starts
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.13 scrub ok
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.1e scrub starts
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.1e scrub ok
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.4 scrub starts
Feb 02 11:15:03 compute-0 ceph-mon[74676]: 9.4 scrub ok
Feb 02 11:15:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:03 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614009f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:03.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111503 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
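
The haproxy line above reports the nfs.cephfs.0 backend UP after a Layer4 (plain TCP connect) check, and the ganesha svc_vc_recv "(will set dead)" events recur on the same one-to-two-second cadence, always on fd 40. A plausible reading, though the log alone cannot prove it, is that each probe opens a TCP connection to ganesha and closes it without sending an RPC header, which TIRPC then logs as a failed header receive. A minimal sketch, assuming journal text on stdin, that tallies those events per service thread to confirm the steady probe pattern:

```python
#!/usr/bin/env python3
# Minimal sketch: tally ganesha.nfsd TIRPC "svc_vc_recv ... (will set dead)"
# events per service thread and fd, from lines like those above.
import re
import sys
from collections import Counter

EVENT = re.compile(
    r'ganesha\.nfsd-\d+\[(?P<thread>svc_\d+)\] .*svc_vc_recv: \S+ '
    r'fd (?P<fd>\d+)'
)

per_thread = Counter()
for line in sys.stdin:
    m = EVENT.search(line)
    if m:
        per_thread[(m.group("thread"), m.group("fd"))] += 1

for (thread, fd), n in per_thread.most_common():
    print(f"{thread} fd {fd}: {n} dead-connection events")
```
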
Feb 02 11:15:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 02 11:15:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Feb 02 11:15:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:03 compute-0 sshd-session[106611]: Accepted publickey for zuul from 192.168.122.30 port 45898 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:15:03 compute-0 systemd-logind[793]: New session 39 of user zuul.
Feb 02 11:15:03 compute-0 systemd[1]: Started Session 39 of User zuul.
Feb 02 11:15:03 compute-0 sshd-session[106611]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:15:03 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 02 11:15:03 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 02 11:15:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.f scrub starts
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.f scrub ok
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.a scrub starts
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.a scrub ok
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.1c scrub starts
Feb 02 11:15:04 compute-0 ceph-mon[74676]: 9.1c scrub ok
Feb 02 11:15:04 compute-0 ceph-mon[74676]: pgmap v41: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Feb 02 11:15:04 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 11:15:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb 02 11:15:04 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb 02 11:15:04 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 91 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=91) [1] r=0 lpr=91 pi=[74,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:04 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 91 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=91) [1] r=0 lpr=91 pi=[74,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:04 compute-0 python3.9[106764]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 02 11:15:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618004290 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.b scrub starts
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.b scrub ok
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.1a scrub starts
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.1a scrub ok
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.12 scrub starts
Feb 02 11:15:05 compute-0 ceph-mon[74676]: 9.12 scrub ok
Feb 02 11:15:05 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 11:15:05 compute-0 ceph-mon[74676]: osdmap e91: 3 total, 3 up, 3 in
Feb 02 11:15:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb 02 11:15:05 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb 02 11:15:05 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 92 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=92) [1]/[2] r=-1 lpr=92 pi=[74,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:05 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 92 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=92) [1]/[2] r=-1 lpr=92 pi=[74,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:05 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 92 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=92) [1]/[2] r=-1 lpr=92 pi=[74,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:05 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 92 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=92) [1]/[2] r=-1 lpr=92 pi=[74,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:05 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:05.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 02 11:15:05 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Feb 02 11:15:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:05 compute-0 python3.9[106942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:15:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:05.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb 02 11:15:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 11:15:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb 02 11:15:06 compute-0 ceph-mon[74676]: 9.8 scrub starts
Feb 02 11:15:06 compute-0 ceph-mon[74676]: 9.8 scrub ok
Feb 02 11:15:06 compute-0 ceph-mon[74676]: osdmap e92: 3 total, 3 up, 3 in
Feb 02 11:15:06 compute-0 ceph-mon[74676]: pgmap v44: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:06 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Feb 02 11:15:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb 02 11:15:06 compute-0 sudo[107097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgvfmttluvflevreklafskgglopqnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030906.021148-88-210922224881337/AnsiballZ_command.py'
Feb 02 11:15:06 compute-0 sudo[107097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c001080 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:06 compute-0 python3.9[107099]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:15:06 compute-0 sudo[107097]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0000b60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:06] "GET /metrics HTTP/1.1" 200 48279 "" "Prometheus/2.51.0"
Feb 02 11:15:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:06] "GET /metrics HTTP/1.1" 200 48279 "" "Prometheus/2.51.0"
Feb 02 11:15:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb 02 11:15:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb 02 11:15:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb 02 11:15:07 compute-0 ceph-mon[74676]: 9.3 scrub starts
Feb 02 11:15:07 compute-0 ceph-mon[74676]: 9.3 scrub ok
Feb 02 11:15:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 11:15:07 compute-0 ceph-mon[74676]: osdmap e93: 3 total, 3 up, 3 in
Feb 02 11:15:07 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 94 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:07 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 94 pg[9.d( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=8 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:07 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 94 pg[9.d( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=8 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:07 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 94 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:07 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604000d00 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:07.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:07 compute-0 sudo[107251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozvasmafojiajqqsiwyoqupkmzsagsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030907.0154471-124-198670828627780/AnsiballZ_stat.py'
Feb 02 11:15:07 compute-0 sudo[107251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:07.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:07 compute-0 python3.9[107253]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:15:07 compute-0 sudo[107251]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb 02 11:15:08 compute-0 ceph-mon[74676]: 9.7 scrub starts
Feb 02 11:15:08 compute-0 ceph-mon[74676]: 9.7 scrub ok
Feb 02 11:15:08 compute-0 ceph-mon[74676]: osdmap e94: 3 total, 3 up, 3 in
Feb 02 11:15:08 compute-0 ceph-mon[74676]: pgmap v47: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb 02 11:15:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb 02 11:15:08 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 95 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:08 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 95 pg[9.d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=8 ec=55/37 lis/c=92/74 les/c/f=93/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:08 compute-0 sudo[107406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjtzvhkhqbbwrfbwqtxydoztpzsodadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030907.9022982-157-68190173319197/AnsiballZ_file.py'
Feb 02 11:15:08 compute-0 sudo[107406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:08 compute-0 python3.9[107408]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:15:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:08 compute-0 sudo[107406]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c001bc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:08 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 02 11:15:08 compute-0 ceph-osd[83123]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 02 11:15:09 compute-0 sudo[107559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kynjmmqcfsbqtqbuydhzlhgmjlijgamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030908.8417308-184-76767309380697/AnsiballZ_file.py'
Feb 02 11:15:09 compute-0 sudo[107559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:09 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f00016a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:09 compute-0 ceph-mon[74676]: 9.5 deep-scrub starts
Feb 02 11:15:09 compute-0 ceph-mon[74676]: 9.5 deep-scrub ok
Feb 02 11:15:09 compute-0 ceph-mon[74676]: osdmap e95: 3 total, 3 up, 3 in
Feb 02 11:15:09 compute-0 python3.9[107561]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:15:09 compute-0 sudo[107559]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:09.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:10 compute-0 python3.9[107712]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:15:10 compute-0 network[107729]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:15:10 compute-0 network[107730]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:15:10 compute-0 network[107731]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:15:10 compute-0 ceph-mon[74676]: 9.1b deep-scrub starts
Feb 02 11:15:10 compute-0 ceph-mon[74676]: 9.1b deep-scrub ok
Feb 02 11:15:10 compute-0 ceph-mon[74676]: 9.1d scrub starts
Feb 02 11:15:10 compute-0 ceph-mon[74676]: 9.1d scrub ok
Feb 02 11:15:10 compute-0 ceph-mon[74676]: pgmap v49: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:11 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c001bc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:11 compute-0 ceph-mon[74676]: 9.18 scrub starts
Feb 02 11:15:11 compute-0 ceph-mon[74676]: 9.18 scrub ok
Feb 02 11:15:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 02 11:15:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Feb 02 11:15:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb 02 11:15:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 11:15:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb 02 11:15:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb 02 11:15:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 96 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 96 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:12 compute-0 ceph-mon[74676]: 9.19 scrub starts
Feb 02 11:15:12 compute-0 ceph-mon[74676]: 9.19 scrub ok
Feb 02 11:15:12 compute-0 ceph-mon[74676]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Feb 02 11:15:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f00016a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:12 compute-0 python3.9[107994]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:15:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:13 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb 02 11:15:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb 02 11:15:13 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb 02 11:15:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:13 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=67/67 les/c/f=68/68/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:13 compute-0 ceph-mon[74676]: 9.1f scrub starts
Feb 02 11:15:13 compute-0 ceph-mon[74676]: 9.1f scrub ok
Feb 02 11:15:13 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 11:15:13 compute-0 ceph-mon[74676]: osdmap e96: 3 total, 3 up, 3 in
Feb 02 11:15:13 compute-0 ceph-mon[74676]: osdmap e97: 3 total, 3 up, 3 in
Feb 02 11:15:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb 02 11:15:13 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb 02 11:15:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:13 compute-0 python3.9[108144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:15:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb 02 11:15:14 compute-0 ceph-mon[74676]: pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb 02 11:15:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 02 11:15:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb 02 11:15:14 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb 02 11:15:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:15:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f00016a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:15 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604001820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:15 compute-0 python3.9[108300]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb 02 11:15:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 02 11:15:15 compute-0 ceph-mon[74676]: osdmap e98: 3 total, 3 up, 3 in
Feb 02 11:15:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb 02 11:15:15 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 99 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=98) [1] r=0 lpr=99 pi=[55,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 99 pg[9.f( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=7 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 99 pg[9.f( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=7 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 99 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 99 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb 02 11:15:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb 02 11:15:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 02 11:15:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb 02 11:15:15 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=100) [1] r=0 lpr=100 pi=[55,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 100 pg[9.f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=7 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:15 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 100 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=97/67 les/c/f=98/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:16 compute-0 sudo[108457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-horptezccdsttozujesvxurbmbdlzkyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030915.8548522-328-115241170627407/AnsiballZ_setup.py'
Feb 02 11:15:16 compute-0 sudo[108457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:16 compute-0 python3.9[108459]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:15:16 compute-0 ceph-mon[74676]: osdmap e99: 3 total, 3 up, 3 in
Feb 02 11:15:16 compute-0 ceph-mon[74676]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Feb 02 11:15:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 02 11:15:16 compute-0 ceph-mon[74676]: osdmap e100: 3 total, 3 up, 3 in
Feb 02 11:15:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002c40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb 02 11:15:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb 02 11:15:16 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb 02 11:15:16 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 101 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[55,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:16 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 101 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[55,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:16 compute-0 sudo[108457]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:16 compute-0 sudo[108542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gegjplepjlpxwyaquoxlgfkmfebxrqzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030915.8548522-328-115241170627407/AnsiballZ_dnf.py'
Feb 02 11:15:16 compute-0 sudo[108542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:15:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:16] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:15:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:16] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:15:17 compute-0 python3.9[108544]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:15:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0002b10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:17.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 1 unknown, 2 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb 02 11:15:17 compute-0 ceph-mon[74676]: osdmap e101: 3 total, 3 up, 3 in
Feb 02 11:15:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb 02 11:15:17 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb 02 11:15:17 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 102 pg[9.10( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=2 ec=55/37 lis/c=100/55 les/c/f=101/56/0 sis=102) [1] r=0 lpr=102 pi=[55,102)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:17 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 102 pg[9.10( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=2 ec=55/37 lis/c=100/55 les/c/f=101/56/0 sis=102) [1] r=0 lpr=102 pi=[55,102)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604002cb0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb 02 11:15:18 compute-0 ceph-mon[74676]: pgmap v59: 353 pgs: 1 unknown, 2 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:18 compute-0 ceph-mon[74676]: osdmap e102: 3 total, 3 up, 3 in
Feb 02 11:15:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb 02 11:15:18 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb 02 11:15:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 103 pg[9.11( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=101/55 les/c/f=102/56/0 sis=103) [1] r=0 lpr=103 pi=[55,103)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 103 pg[9.11( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=5 ec=55/37 lis/c=101/55 les/c/f=102/56/0 sis=103) [1] r=0 lpr=103 pi=[55,103)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:18 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 103 pg[9.10( v 43'1029 (0'0,43'1029] local-lis/les=102/103 n=2 ec=55/37 lis/c=100/55 les/c/f=101/56/0 sis=102) [1] r=0 lpr=102 pi=[55,102)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002c40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:19 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:19.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 1 unknown, 2 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:19.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb 02 11:15:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb 02 11:15:19 compute-0 ceph-mon[74676]: osdmap e103: 3 total, 3 up, 3 in
Feb 02 11:15:19 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb 02 11:15:19 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 104 pg[9.11( v 43'1029 (0'0,43'1029] local-lis/les=103/104 n=5 ec=55/37 lis/c=101/55 les/c/f=102/56/0 sis=103) [1] r=0 lpr=103 pi=[55,103)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0002b10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:20 compute-0 ceph-mon[74676]: pgmap v62: 353 pgs: 1 unknown, 2 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:20 compute-0 ceph-mon[74676]: osdmap e104: 3 total, 3 up, 3 in
Feb 02 11:15:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0002b10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:21 compute-0 sudo[108616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:15:21 compute-0 sudo[108616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:15:21 compute-0 sudo[108616]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002c40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 625 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Feb 02 11:15:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb 02 11:15:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Feb 02 11:15:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:21.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb 02 11:15:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Feb 02 11:15:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 02 11:15:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb 02 11:15:21 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb 02 11:15:21 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 105 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=105) [1] r=0 lpr=105 pi=[55,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb 02 11:15:22 compute-0 ceph-mon[74676]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 625 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Feb 02 11:15:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 02 11:15:22 compute-0 ceph-mon[74676]: osdmap e105: 3 total, 3 up, 3 in
Feb 02 11:15:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb 02 11:15:22 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb 02 11:15:22 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 106 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[55,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:22 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 106 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[55,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0002b10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:23 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0002b10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:23.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 630 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Feb 02 11:15:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb 02 11:15:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Feb 02 11:15:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:23.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb 02 11:15:23 compute-0 ceph-mon[74676]: osdmap e106: 3 total, 3 up, 3 in
Feb 02 11:15:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Feb 02 11:15:23 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 02 11:15:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb 02 11:15:23 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb 02 11:15:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb 02 11:15:24 compute-0 ceph-mon[74676]: pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 630 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Feb 02 11:15:24 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 02 11:15:24 compute-0 ceph-mon[74676]: osdmap e107: 3 total, 3 up, 3 in
Feb 02 11:15:24 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb 02 11:15:24 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb 02 11:15:24 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 108 pg[9.12( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=106/55 les/c/f=107/56/0 sis=108) [1] r=0 lpr=108 pi=[55,108)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:24 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 108 pg[9.12( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=106/55 les/c/f=107/56/0 sis=108) [1] r=0 lpr=108 pi=[55,108)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb 02 11:15:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Feb 02 11:15:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb 02 11:15:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 02 11:15:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb 02 11:15:25 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb 02 11:15:25 compute-0 ceph-mon[74676]: osdmap e108: 3 total, 3 up, 3 in
Feb 02 11:15:25 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Feb 02 11:15:25 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 109 pg[9.12( v 43'1029 (0'0,43'1029] local-lis/les=108/109 n=4 ec=55/37 lis/c=106/55 les/c/f=107/56/0 sis=108) [1] r=0 lpr=108 pi=[55,108)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:26 compute-0 ceph-mon[74676]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 02 11:15:26 compute-0 ceph-mon[74676]: osdmap e109: 3 total, 3 up, 3 in
Feb 02 11:15:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:26] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:15:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:26] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:15:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:27 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66040035d0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111527 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:15:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Feb 02 11:15:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:28 compute-0 ceph-mon[74676]: pgmap v72: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Feb 02 11:15:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:29 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:15:29
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Some PGs (0.002833) are inactive; try again later
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:15:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:15:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:29.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:15:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:15:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66040035d0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:30 compute-0 ceph-mon[74676]: pgmap v73: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb 02 11:15:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 263 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Feb 02 11:15:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb 02 11:15:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Feb 02 11:15:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb 02 11:15:31 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Feb 02 11:15:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 02 11:15:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb 02 11:15:31 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb 02 11:15:31 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=110) [1] r=0 lpr=110 pi=[74,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb 02 11:15:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb 02 11:15:32 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb 02 11:15:32 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[74,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:32 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=74/74 les/c/f=75/75/0 sis=111) [1]/[2] r=-1 lpr=111 pi=[74,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:32 compute-0 ceph-mon[74676]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 263 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Feb 02 11:15:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 02 11:15:32 compute-0 ceph-mon[74676]: osdmap e110: 3 total, 3 up, 3 in
Feb 02 11:15:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:33 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 264 B/s rd, 0 op/s
Feb 02 11:15:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb 02 11:15:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Feb 02 11:15:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb 02 11:15:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:35 compute-0 ceph-mon[74676]: osdmap e111: 3 total, 3 up, 3 in
Feb 02 11:15:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Feb 02 11:15:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 02 11:15:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb 02 11:15:35 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 112 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=112 pruub=8.904110909s) [2] r=-1 lpr=112 pi=[76,112)/1 crt=43'1029 mlcod 0'0 active pruub 262.389068604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 112 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=112 pruub=8.903961182s) [2] r=-1 lpr=112 pi=[76,112)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 262.389068604s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:35 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:15:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb 02 11:15:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Feb 02 11:15:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb 02 11:15:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 02 11:15:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb 02 11:15:35 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 luod=0'0 crt=43'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:35 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:35 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:15:36 compute-0 ceph-mon[74676]: pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 264 B/s rd, 0 op/s
Feb 02 11:15:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 02 11:15:36 compute-0 ceph-mon[74676]: osdmap e112: 3 total, 3 up, 3 in
Feb 02 11:15:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Feb 02 11:15:36 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 02 11:15:36 compute-0 ceph-mon[74676]: osdmap e113: 3 total, 3 up, 3 in
Feb 02 11:15:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb 02 11:15:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb 02 11:15:36 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb 02 11:15:36 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:36 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:36] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Feb 02 11:15:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:36] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Feb 02 11:15:37 compute-0 ceph-mon[74676]: pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:15:37 compute-0 ceph-mon[74676]: osdmap e114: 3 total, 3 up, 3 in
Feb 02 11:15:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:37 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140022a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 220 B/s wr, 1 op/s
Feb 02 11:15:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb 02 11:15:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb 02 11:15:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003785133s) [2] async=[2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 43'1029 active pruub 270.924621582s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:37 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb 02 11:15:37 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb 02 11:15:38 compute-0 ceph-mon[74676]: pgmap v82: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 220 B/s wr, 1 op/s
Feb 02 11:15:38 compute-0 ceph-mon[74676]: osdmap e115: 3 total, 3 up, 3 in
Feb 02 11:15:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb 02 11:15:38 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb 02 11:15:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:15:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:15:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:39 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003980 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:39 compute-0 ceph-mon[74676]: osdmap e116: 3 total, 3 up, 3 in
Feb 02 11:15:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:39.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140022a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:40 compute-0 ceph-mon[74676]: pgmap v85: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:41 compute-0 sudo[108716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:15:41 compute-0 sudo[108716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:15:41 compute-0 sudo[108716]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:41 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Feb 02 11:15:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb 02 11:15:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Feb 02 11:15:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb 02 11:15:41 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 02 11:15:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb 02 11:15:41 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb 02 11:15:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Feb 02 11:15:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:15:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003b20 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:42 compute-0 ceph-mon[74676]: pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Feb 02 11:15:42 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 02 11:15:42 compute-0 ceph-mon[74676]: osdmap e117: 3 total, 3 up, 3 in
Feb 02 11:15:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140022a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:43 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Feb 02 11:15:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb 02 11:15:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Feb 02 11:15:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb 02 11:15:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Feb 02 11:15:43 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 02 11:15:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb 02 11:15:43 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb 02 11:15:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:15:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb 02 11:15:44 compute-0 ceph-mon[74676]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Feb 02 11:15:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 02 11:15:44 compute-0 ceph-mon[74676]: osdmap e118: 3 total, 3 up, 3 in
Feb 02 11:15:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb 02 11:15:44 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb 02 11:15:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003b40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:45 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140091b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Feb 02 11:15:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb 02 11:15:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Feb 02 11:15:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 11:15:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb 02 11:15:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 02 11:15:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb 02 11:15:45 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb 02 11:15:45 compute-0 ceph-mon[74676]: osdmap e119: 3 total, 3 up, 3 in
Feb 02 11:15:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Feb 02 11:15:46 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442985535s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 active pruub 275.912414551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:46 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb 02 11:15:46 compute-0 ceph-mon[74676]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Feb 02 11:15:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 02 11:15:46 compute-0 ceph-mon[74676]: osdmap e120: 3 total, 3 up, 3 in
Feb 02 11:15:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb 02 11:15:46 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb 02 11:15:46 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:46 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:46] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Feb 02 11:15:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:46] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Feb 02 11:15:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:47 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003b60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111547 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:15:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:15:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:47.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:15:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb 02 11:15:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb 02 11:15:47 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb 02 11:15:47 compute-0 ceph-mon[74676]: osdmap e121: 3 total, 3 up, 3 in
Feb 02 11:15:47 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140091b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb 02 11:15:48 compute-0 ceph-mon[74676]: pgmap v94: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:48 compute-0 ceph-mon[74676]: osdmap e122: 3 total, 3 up, 3 in
Feb 02 11:15:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb 02 11:15:48 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb 02 11:15:48 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.969496727s) [0] async=[0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 43'1029 active pruub 282.125457764s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:48 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:49 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb 02 11:15:49 compute-0 ceph-mon[74676]: osdmap e123: 3 total, 3 up, 3 in
Feb 02 11:15:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb 02 11:15:49 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb 02 11:15:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:15:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003b80 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:50 compute-0 ceph-mon[74676]: pgmap v97: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:15:50 compute-0 ceph-mon[74676]: osdmap e124: 3 total, 3 up, 3 in
Feb 02 11:15:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140091b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:51 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:51.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 2 objects/s recovering
Feb 02 11:15:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb 02 11:15:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Feb 02 11:15:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb 02 11:15:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 02 11:15:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb 02 11:15:51 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb 02 11:15:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Feb 02 11:15:52 compute-0 systemd[93326]: Starting Mark boot as successful...
Feb 02 11:15:52 compute-0 systemd[93326]: Finished Mark boot as successful.
Feb 02 11:15:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb 02 11:15:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb 02 11:15:52 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb 02 11:15:52 compute-0 ceph-mon[74676]: pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 2 objects/s recovering
Feb 02 11:15:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 02 11:15:52 compute-0 ceph-mon[74676]: osdmap e125: 3 total, 3 up, 3 in
Feb 02 11:15:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003ba0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:53 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:53.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 2 objects/s recovering
Feb 02 11:15:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb 02 11:15:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Feb 02 11:15:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:53.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb 02 11:15:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 02 11:15:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb 02 11:15:53 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb 02 11:15:53 compute-0 ceph-mon[74676]: osdmap e126: 3 total, 3 up, 3 in
Feb 02 11:15:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Feb 02 11:15:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 02 11:15:53 compute-0 ceph-mon[74676]: osdmap e127: 3 total, 3 up, 3 in
Feb 02 11:15:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb 02 11:15:54 compute-0 ceph-mon[74676]: pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 2 objects/s recovering
Feb 02 11:15:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb 02 11:15:54 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb 02 11:15:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:55 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003bc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb 02 11:15:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Feb 02 11:15:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:15:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:15:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:55.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:15:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb 02 11:15:55 compute-0 ceph-mon[74676]: osdmap e128: 3 total, 3 up, 3 in
Feb 02 11:15:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Feb 02 11:15:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 02 11:15:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb 02 11:15:55 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb 02 11:15:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:56 compute-0 ceph-mon[74676]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:15:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 02 11:15:56 compute-0 ceph-mon[74676]: osdmap e129: 3 total, 3 up, 3 in
Feb 02 11:15:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:56] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Feb 02 11:15:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:15:56] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Feb 02 11:15:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:57 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:15:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:57.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:15:57 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914955139s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 active pruub 290.587219238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:57 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:15:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 665 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Feb 02 11:15:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:15:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:15:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb 02 11:15:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb 02 11:15:58 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb 02 11:15:58 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:15:58 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:15:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003be0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 02 11:15:59 compute-0 ceph-mon[74676]: pgmap v107: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 665 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Feb 02 11:15:59 compute-0 ceph-mon[74676]: osdmap e130: 3 total, 3 up, 3 in
Feb 02 11:15:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 02 11:15:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 02 11:15:59 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:15:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:15:59 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:15:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:15:59.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:15:59 compute-0 sudo[108789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:15:59 compute-0 sudo[108789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:15:59 compute-0 sudo[108789]: pam_unix(sudo:session): session closed for user root
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 676 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Feb 02 11:15:59 compute-0 sudo[108814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:15:59 compute-0 sudo[108814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:15:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:15:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f3cd49d8400>)]
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f3ccf034e80>)]
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 02 11:15:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 02 11:15:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:15:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:15:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:15:59.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 02 11:16:00 compute-0 ceph-mon[74676]: osdmap e131: 3 total, 3 up, 3 in
Feb 02 11:16:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 02 11:16:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 02 11:16:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991101265s) [2] async=[2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 43'1029 active pruub 293.394470215s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:00 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:16:00 compute-0 podman[108913]: 2026-02-02 11:16:00.088934617 +0000 UTC m=+0.069183597 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:16:00 compute-0 podman[108913]: 2026-02-02 11:16:00.180179692 +0000 UTC m=+0.160428652 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:16:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:00 compute-0 podman[109051]: 2026-02-02 11:16:00.681276421 +0000 UTC m=+0.051141509 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:00 compute-0 podman[109051]: 2026-02-02 11:16:00.690115889 +0000 UTC m=+0.059980957 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c00 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:00 compute-0 podman[109124]: 2026-02-02 11:16:00.911585186 +0000 UTC m=+0.051518539 container exec 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:16:00 compute-0 podman[109124]: 2026-02-02 11:16:00.944287946 +0000 UTC m=+0.084221279 container exec_died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:16:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 02 11:16:01 compute-0 ceph-mon[74676]: pgmap v110: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 676 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Feb 02 11:16:01 compute-0 ceph-mon[74676]: osdmap e132: 3 total, 3 up, 3 in
Feb 02 11:16:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 02 11:16:01 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 02 11:16:01 compute-0 podman[109188]: 2026-02-02 11:16:01.171130394 +0000 UTC m=+0.089823967 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:16:01 compute-0 podman[109188]: 2026-02-02 11:16:01.202434374 +0000 UTC m=+0.121127887 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:16:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:01 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:01 compute-0 sudo[109211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:16:01 compute-0 sudo[109211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:01 compute-0 sudo[109211]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:01.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:01 compute-0 podman[109278]: 2026-02-02 11:16:01.422218114 +0000 UTC m=+0.066521182 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Feb 02 11:16:01 compute-0 podman[109278]: 2026-02-02 11:16:01.477076946 +0000 UTC m=+0.121380004 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, release=1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Feb 02 11:16:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 54 B/s, 1 objects/s recovering
Feb 02 11:16:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:01.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
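The radosgw lines come in threes: a "starting new request" marker, a "req done" line with op/http status and latency, and a beast access line. The anonymous HEAD / probes arriving roughly every two seconds from 192.168.122.100 and .102 look like load-balancer health checks rather than client traffic. A minimal sketch for pulling the interesting fields out of the beast lines (the layout is inferred from this log; other radosgw builds may format differently):

    #!/usr/bin/env python3
    # Extract client IP, request, HTTP status and latency from radosgw
    # "beast" access-log lines, e.g. fed from journalctl on stdin.
    import re
    import sys

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    for line in sys.stdin:
        m = BEAST.search(line)
        if m:
            print(m["ip"], repr(m["req"]), m["status"], m["latency"] + "s")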
Feb 02 11:16:01 compute-0 podman[109344]: 2026-02-02 11:16:01.685542068 +0000 UTC m=+0.056733827 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:01 compute-0 podman[109344]: 2026-02-02 11:16:01.716151878 +0000 UTC m=+0.087343617 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:01 compute-0 podman[109418]: 2026-02-02 11:16:01.923969682 +0000 UTC m=+0.056763258 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:16:02 compute-0 ceph-mon[74676]: osdmap e133: 3 total, 3 up, 3 in
Feb 02 11:16:02 compute-0 ceph-mon[74676]: pgmap v113: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 54 B/s, 1 objects/s recovering
Feb 02 11:16:02 compute-0 podman[109418]: 2026-02-02 11:16:02.133174734 +0000 UTC m=+0.265968290 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:16:02 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.dhyzzj(active, since 92s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:16:02 compute-0 podman[109526]: 2026-02-02 11:16:02.435420102 +0000 UTC m=+0.055141461 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:02 compute-0 podman[109526]: 2026-02-02 11:16:02.462438552 +0000 UTC m=+0.082159891 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:16:02 compute-0 sudo[108814]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:16:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:16:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:02 compute-0 sudo[109569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:16:02 compute-0 sudo[109569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:02 compute-0 sudo[109569]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:02 compute-0 sudo[109594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:16:02 compute-0 sudo[109594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mgrmap e32: compute-0.dhyzzj(active, since 92s), standbys: compute-2.zebspe, compute-1.iybsjv
Feb 02 11:16:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:03 compute-0 sudo[109594]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:03 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c20 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:16:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:16:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
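Each mgr-originated command shows up twice on the monitor: a handle_command line as the mon receives it and an audit-channel dispatch record. The burst above is cephadm's periodic refresh: config-key writes of per-host state, auth get for client keyrings, generate-minimal-conf, and an osd tree query filtered to destroyed OSDs. A minimal sketch, assuming an admin keyring on the host, re-issuing two of the same read-only commands through the ceph CLI:

    #!/usr/bin/env python3
    # Re-issue two of the read-only mon commands dispatched above
    # (requires /etc/ceph/ceph.conf and an admin keyring on the host).
    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        print("$", " ".join(cmd))
        print(out.stdout)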
Feb 02 11:16:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:03.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:03 compute-0 sudo[109651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:16:03 compute-0 sudo[109651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:03 compute-0 sudo[109651]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:03 compute-0 sudo[109676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:16:03 compute-0 sudo[109676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:03 compute-0 sudo[108542]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 560 B/s wr, 0 op/s; 40 B/s, 0 objects/s recovering
Feb 02 11:16:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.709810314 +0000 UTC m=+0.036546378 container create a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:16:03 compute-0 systemd[1]: Started libpod-conmon-a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8.scope.
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.694271837 +0000 UTC m=+0.021007921 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:03 compute-0 sudo[109909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odzjagmzhffgpalpzecogkiuwijbozry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030963.5347044-364-75360420195154/AnsiballZ_command.py'
Feb 02 11:16:03 compute-0 sudo[109909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.807235544 +0000 UTC m=+0.133971628 container init a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.814530489 +0000 UTC m=+0.141266543 container start a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.823714967 +0000 UTC m=+0.150451051 container attach a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:16:03 compute-0 recursing_tu[109910]: 167 167
Feb 02 11:16:03 compute-0 systemd[1]: libpod-a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8.scope: Deactivated successfully.
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.842755792 +0000 UTC m=+0.169491876 container died a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fd09a09774a6ef37227162a326ef84ae2f879076355e79339517f0437b9af48-merged.mount: Deactivated successfully.
Feb 02 11:16:03 compute-0 podman[109842]: 2026-02-02 11:16:03.883150098 +0000 UTC m=+0.209886162 container remove a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_tu, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:16:03 compute-0 systemd[1]: libpod-conmon-a5c010af14f2fd2a1ac376712a19287028a093ef2353ddcb5ef401048227d5e8.scope: Deactivated successfully.
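The short-lived recursing_tu container above (create, init, start, attach, died, remove inside ~0.2 s) is cephadm probing the image before using it, and the "167 167" it prints matches the ceph user and group IDs inside the image. A minimal sketch of the same one-shot probe, assuming the uid/gid check is a stat of /var/lib/ceph (the exact command line is not captured in this log):

    #!/usr/bin/env python3
    # One-shot container probe: print the owner uid/gid of /var/lib/ceph
    # inside the image, then let podman remove the container (--rm).
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # "167 167" here: the ceph uid/gid baked into the image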
Feb 02 11:16:03 compute-0 python3.9[109914]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.03400628 +0000 UTC m=+0.052344193 container create d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:16:04 compute-0 systemd[1]: Started libpod-conmon-d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb.scope.
Feb 02 11:16:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
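The five kernel lines above (and the similar runs at 11:16:05 and 11:16:06) are informational: the container bind mounts sit on an xfs filesystem created without the bigtime feature, so its inode timestamps max out in 2038. For short-lived cephadm helper containers this is harmless. A quick check, with the path below only an example (a sketch):

    #!/usr/bin/env python3
    # Report whether an xfs filesystem was made with bigtime (no 2038 limit).
    # Current xfsprogs prints "bigtime=0" or "bigtime=1" in xfs_info output.
    import subprocess

    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          check=True, capture_output=True, text=True).stdout
    print("bigtime on" if "bigtime=1" in info else "bigtime off (2038-limited)")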
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.015723346 +0000 UTC m=+0.034061299 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.126385656 +0000 UTC m=+0.144723579 container init d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.137770576 +0000 UTC m=+0.156108479 container start d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.141758948 +0000 UTC m=+0.160096851 container attach d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:16:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:16:04 compute-0 ceph-mon[74676]: pgmap v114: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 560 B/s wr, 0 op/s; 40 B/s, 0 objects/s recovering
Feb 02 11:16:04 compute-0 admiring_swartz[109959]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:16:04 compute-0 admiring_swartz[109959]: --> All data devices are unavailable
Feb 02 11:16:04 compute-0 systemd[1]: libpod-d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb.scope: Deactivated successfully.
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.454542113 +0000 UTC m=+0.472880016 container died d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b3587af012c0a38e40fcb6952c50926e85bf98bd7005d63413306bf3008cd24-merged.mount: Deactivated successfully.
Feb 02 11:16:04 compute-0 podman[109937]: 2026-02-02 11:16:04.493603461 +0000 UTC m=+0.511941364 container remove d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_swartz, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:16:04 compute-0 systemd[1]: libpod-conmon-d776c67599733c058dd9627fabe613760b86333d819108c58f6b791a57e7e4eb.scope: Deactivated successfully.
Feb 02 11:16:04 compute-0 sudo[109676]: pam_unix(sudo:session): session closed for user root
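Net effect of the ceph-volume run that just closed: the admiring_swartz container executed `lvm batch --no-auto /dev/ceph_vg0/ceph_lv0`, reported "0 physical, 1 LVM" and "All data devices are unavailable", and exited cleanly. "Unavailable" here most plausibly means the LV is already consumed by an OSD, which the lvm list output a moment later in this log supports (it carries ceph.osd_id=1 tags), so the batch is an idempotent no-op rather than a failure. One way to preview such a run without touching the device, assuming ceph-volume's standard --report flag (a sketch):

    #!/usr/bin/env python3
    # Dry-run the same batch call via cephadm; --report prints the plan
    # instead of creating anything. fsid and LV path are copied from this
    # log and are host-specific.
    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--report", "--format", "json"],
        check=True,
    )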
Feb 02 11:16:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:04 compute-0 sudo[110113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:16:04 compute-0 sudo[110113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:04 compute-0 sudo[110113]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:04 compute-0 sudo[110138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:16:04 compute-0 sudo[110138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:04 compute-0 sudo[109909]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.018683485 +0000 UTC m=+0.035907681 container create 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:16:05 compute-0 systemd[1]: Started libpod-conmon-30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49.scope.
Feb 02 11:16:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.002357776 +0000 UTC m=+0.019581992 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.10562451 +0000 UTC m=+0.122848726 container init 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.112232555 +0000 UTC m=+0.129456751 container start 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.115873438 +0000 UTC m=+0.133097634 container attach 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:16:05 compute-0 gallant_chebyshev[110297]: 167 167
Feb 02 11:16:05 compute-0 systemd[1]: libpod-30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49.scope: Deactivated successfully.
Feb 02 11:16:05 compute-0 conmon[110297]: conmon 30e9056f9922237ca2cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49.scope/container/memory.events
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.118124181 +0000 UTC m=+0.135348377 container died 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a9f462ddbff4d7040ea278aa2c418f7710ab6a49ac71a750633017e8045b331-merged.mount: Deactivated successfully.
Feb 02 11:16:05 compute-0 podman[110281]: 2026-02-02 11:16:05.156595643 +0000 UTC m=+0.173819839 container remove 30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:16:05 compute-0 systemd[1]: libpod-conmon-30e9056f9922237ca2cc04506e29a500b816a3410b88daad73e002596b3fac49.scope: Deactivated successfully.
Feb 02 11:16:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:05 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.286566367 +0000 UTC m=+0.041728934 container create 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:16:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:05.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:05 compute-0 systemd[1]: Started libpod-conmon-882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360.scope.
Feb 02 11:16:05 compute-0 sudo[110408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoljzvsbvotlljyjfhklurnppiusaljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030964.767751-388-261581616137007/AnsiballZ_selinux.py'
Feb 02 11:16:05 compute-0 sudo[110408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.270227318 +0000 UTC m=+0.025389915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7075ad0898e95c09641db2848cda317ae08c6a859aa5d6c5d6bd8be947109a6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7075ad0898e95c09641db2848cda317ae08c6a859aa5d6c5d6bd8be947109a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7075ad0898e95c09641db2848cda317ae08c6a859aa5d6c5d6bd8be947109a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7075ad0898e95c09641db2848cda317ae08c6a859aa5d6c5d6bd8be947109a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.386135617 +0000 UTC m=+0.141298204 container init 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.394099241 +0000 UTC m=+0.149261808 container start 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.397702652 +0000 UTC m=+0.152865229 container attach 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:16:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 475 B/s wr, 0 op/s; 34 B/s, 0 objects/s recovering
Feb 02 11:16:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:05.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:05 compute-0 python3.9[110414]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 02 11:16:05 compute-0 sudo[110408]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:05 compute-0 trusting_pare[110412]: {
Feb 02 11:16:05 compute-0 trusting_pare[110412]:     "1": [
Feb 02 11:16:05 compute-0 trusting_pare[110412]:         {
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "devices": [
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "/dev/loop3"
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             ],
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "lv_name": "ceph_lv0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "lv_size": "21470642176",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "name": "ceph_lv0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "tags": {
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.cluster_name": "ceph",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.crush_device_class": "",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.encrypted": "0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.osd_id": "1",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.type": "block",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.vdo": "0",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:                 "ceph.with_tpm": "0"
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             },
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "type": "block",
Feb 02 11:16:05 compute-0 trusting_pare[110412]:             "vg_name": "ceph_vg0"
Feb 02 11:16:05 compute-0 trusting_pare[110412]:         }
Feb 02 11:16:05 compute-0 trusting_pare[110412]:     ]
Feb 02 11:16:05 compute-0 trusting_pare[110412]: }
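The JSON block above is the output of the `ceph-volume ... lvm list --format json` call started at 11:16:04: a map of OSD id to its logical volumes, with the ceph.* LV tags repeated in parsed form under "tags". A minimal sketch for summarising it, assuming the JSON has been captured to a file:

    #!/usr/bin/env python3
    # Summarise `ceph-volume lvm list --format json` output: one line per
    # OSD with its LV path, backing device(s) and OSD fsid.
    import json
    import sys

    with open(sys.argv[1]) as f:        # e.g. lvm_list.json
        osds = json.load(f)

    for osd_id, vols in sorted(osds.items()):
        for vol in vols:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"(devices={','.join(vol['devices'])}, "
                  f"osd_fsid={tags['ceph.osd_fsid']})")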
Feb 02 11:16:05 compute-0 systemd[1]: libpod-882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360.scope: Deactivated successfully.
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.69352743 +0000 UTC m=+0.448689997 container died 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7075ad0898e95c09641db2848cda317ae08c6a859aa5d6c5d6bd8be947109a6d-merged.mount: Deactivated successfully.
Feb 02 11:16:05 compute-0 podman[110351]: 2026-02-02 11:16:05.735452179 +0000 UTC m=+0.490614746 container remove 882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pare, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:16:05 compute-0 systemd[1]: libpod-conmon-882373fcd590aa6e30e2a2b0eec20df75f19f27d111ae452d4743515760d7360.scope: Deactivated successfully.
Feb 02 11:16:05 compute-0 sudo[110138]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:05 compute-0 sudo[110459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:16:05 compute-0 sudo[110459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:05 compute-0 sudo[110459]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:05 compute-0 sudo[110484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:16:05 compute-0 sudo[110484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.285283848 +0000 UTC m=+0.107702129 container create 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.200843054 +0000 UTC m=+0.023261365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:06 compute-0 systemd[1]: Started libpod-conmon-325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01.scope.
Feb 02 11:16:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:06 compute-0 sudo[110693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylknqfbupdnwimtxsigdarmdnpcjequs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030966.0805779-421-26986762498496/AnsiballZ_command.py'
Feb 02 11:16:06 compute-0 sudo[110693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.393365557 +0000 UTC m=+0.215783858 container init 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.398847401 +0000 UTC m=+0.221265692 container start 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:16:06 compute-0 sad_moser[110681]: 167 167
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.402128854 +0000 UTC m=+0.224547165 container attach 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:16:06 compute-0 systemd[1]: libpod-325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01.scope: Deactivated successfully.
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.403017369 +0000 UTC m=+0.225435650 container died 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-caa7caa634652f0a48082fbb64c71d1695809b9da83aa605269c2bbdd3dc3799-merged.mount: Deactivated successfully.
Feb 02 11:16:06 compute-0 podman[110608]: 2026-02-02 11:16:06.43684909 +0000 UTC m=+0.259267371 container remove 325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:16:06 compute-0 systemd[1]: libpod-conmon-325b4f1acd199a818ce9e4bc41cb725c1d7b04e9ecc46e8d9e46319be5afda01.scope: Deactivated successfully.
Feb 02 11:16:06 compute-0 podman[110718]: 2026-02-02 11:16:06.559644102 +0000 UTC m=+0.038704059 container create 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:16:06 compute-0 ceph-mon[74676]: pgmap v115: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 475 B/s wr, 0 op/s; 34 B/s, 0 objects/s recovering
Feb 02 11:16:06 compute-0 python3.9[110695]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 02 11:16:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:06 compute-0 sudo[110693]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:06 compute-0 systemd[1]: Started libpod-conmon-8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc.scope.
Feb 02 11:16:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff3056e58de4cc2825dd0bd03dbfc6f018bd720c50d6bb6832a1fd11cfde0c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff3056e58de4cc2825dd0bd03dbfc6f018bd720c50d6bb6832a1fd11cfde0c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff3056e58de4cc2825dd0bd03dbfc6f018bd720c50d6bb6832a1fd11cfde0c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff3056e58de4cc2825dd0bd03dbfc6f018bd720c50d6bb6832a1fd11cfde0c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:16:06 compute-0 podman[110718]: 2026-02-02 11:16:06.63106311 +0000 UTC m=+0.110123107 container init 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:16:06 compute-0 podman[110718]: 2026-02-02 11:16:06.637361628 +0000 UTC m=+0.116421595 container start 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:16:06 compute-0 podman[110718]: 2026-02-02 11:16:06.543709854 +0000 UTC m=+0.022769841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:16:06 compute-0 podman[110718]: 2026-02-02 11:16:06.640815525 +0000 UTC m=+0.119875512 container attach 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:16:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a540 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:06 compute-0 sudo[110912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmvdghdkdqdgjoanxumcwoptvzqdgwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030966.7317038-445-149306852375310/AnsiballZ_file.py'
Feb 02 11:16:06 compute-0 sudo[110912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:06] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Feb 02 11:16:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:06] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Feb 02 11:16:07 compute-0 python3.9[110918]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:16:07 compute-0 sudo[110912]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:07 compute-0 lvm[110963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:16:07 compute-0 lvm[110963]: VG ceph_vg0 finished
Feb 02 11:16:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:07 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:07 compute-0 focused_gagarin[110734]: {}
Feb 02 11:16:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:07.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:07 compute-0 systemd[1]: libpod-8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc.scope: Deactivated successfully.
Feb 02 11:16:07 compute-0 systemd[1]: libpod-8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc.scope: Consumed 1.000s CPU time.
Feb 02 11:16:07 compute-0 podman[110718]: 2026-02-02 11:16:07.345768426 +0000 UTC m=+0.824828383 container died 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff3056e58de4cc2825dd0bd03dbfc6f018bd720c50d6bb6832a1fd11cfde0c4-merged.mount: Deactivated successfully.
Feb 02 11:16:07 compute-0 podman[110718]: 2026-02-02 11:16:07.391152722 +0000 UTC m=+0.870212689 container remove 8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:16:07 compute-0 systemd[1]: libpod-conmon-8a17a655ffbf930dd9e72a9f6faf657966ff896d7980f0d562e05ac7801265bc.scope: Deactivated successfully.
Feb 02 11:16:07 compute-0 sudo[110484]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:16:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:16:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 383 B/s wr, 0 op/s; 27 B/s, 0 objects/s recovering
Feb 02 11:16:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb 02 11:16:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Feb 02 11:16:07 compute-0 sudo[111053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:16:07 compute-0 sudo[111053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:07 compute-0 sudo[111053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:07.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:07 compute-0 sudo[111154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xocmdiyytwivyjvwxqbgtucdydhextof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030967.407272-469-198846307175903/AnsiballZ_mount.py'
Feb 02 11:16:07 compute-0 sudo[111154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:08 compute-0 python3.9[111156]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 02 11:16:08 compute-0 sudo[111154]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:08 compute-0 sshd-session[111079]: Invalid user user from 45.148.10.121 port 43510
Feb 02 11:16:08 compute-0 sshd-session[111079]: Connection closed by invalid user user 45.148.10.121 port 43510 [preauth]
Feb 02 11:16:08 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:08 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:16:08 compute-0 ceph-mon[74676]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 383 B/s wr, 0 op/s; 27 B/s, 0 objects/s recovering
Feb 02 11:16:08 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Feb 02 11:16:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 02 11:16:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 02 11:16:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 02 11:16:08 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 02 11:16:08 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562667847s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 active pruub 302.390136719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:08 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:16:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:09 compute-0 sudo[111309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqbonkjkjtegpsfiulpqddrmfoycnik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030968.9029016-553-255906692061351/AnsiballZ_file.py'
Feb 02 11:16:09 compute-0 sudo[111309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:09 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:09 compute-0 python3.9[111311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:16:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:09.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:09 compute-0 sudo[111309]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 02 11:16:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 02 11:16:09 compute-0 ceph-mon[74676]: osdmap e134: 3 total, 3 up, 3 in
Feb 02 11:16:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 365 B/s rd, 0 op/s
Feb 02 11:16:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 11:16:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:16:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 02 11:16:09 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 02 11:16:09 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:09 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:16:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:09.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:09 compute-0 sudo[111462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlfnauajhisglogiorwllvrtbfdpinq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030969.4969416-577-279535241978838/AnsiballZ_stat.py'
Feb 02 11:16:09 compute-0 sudo[111462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:09 compute-0 python3.9[111464]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:16:09 compute-0 sudo[111462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:10 compute-0 sudo[111540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wypopscwgihwesfhlsjighelfkzjobfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030969.4969416-577-279535241978838/AnsiballZ_file.py'
Feb 02 11:16:10 compute-0 sudo[111540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:10 compute-0 python3.9[111542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:16:10 compute-0 sudo[111540]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 02 11:16:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:16:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 02 11:16:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064677238s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 active pruub 297.917083740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:16:10 compute-0 ceph-mon[74676]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 365 B/s rd, 0 op/s
Feb 02 11:16:10 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb 02 11:16:10 compute-0 ceph-mon[74676]: osdmap e135: 3 total, 3 up, 3 in
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:16:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 02 11:16:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 02 11:16:10 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 02 11:16:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942566872s) [0] async=[0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 43'1029 active pruub 304.866455078s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:10 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:16:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:11 compute-0 sudo[111693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnidvwwzmliftiskmogegpzeltmjwzkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030971.0571911-640-62509660809501/AnsiballZ_stat.py'
Feb 02 11:16:11 compute-0 sudo[111693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:11 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:11.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:11 compute-0 python3.9[111695]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:16:11 compute-0 sudo[111693]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:16:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 11:16:11 compute-0 ceph-mon[74676]: osdmap e136: 3 total, 3 up, 3 in
Feb 02 11:16:11 compute-0 ceph-mon[74676]: osdmap e137: 3 total, 3 up, 3 in
Feb 02 11:16:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 02 11:16:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 02 11:16:11 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 02 11:16:11 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:16:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=infra.usagestats t=2026-02-02T11:16:11.98549614Z level=info msg="Usage stats are ready to report"
Feb 02 11:16:12 compute-0 sudo[111848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfihzjwwbbxgheauhdjpieaxdyuobrxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030972.133046-679-133405568440422/AnsiballZ_getent.py'
Feb 02 11:16:12 compute-0 sudo[111848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:12 compute-0 ceph-mon[74676]: pgmap v122: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:16:12 compute-0 ceph-mon[74676]: osdmap e138: 3 total, 3 up, 3 in
Feb 02 11:16:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 02 11:16:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 02 11:16:12 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 02 11:16:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986603737s) [0] async=[0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 43'1029 active pruub 305.947998047s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:16:12 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:16:12 compute-0 python3.9[111850]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 02 11:16:12 compute-0 sudo[111848]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:13 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:13 compute-0 sudo[112002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaayhwmcicwdevwiusfgricqxowsmzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030973.037433-709-244133367347244/AnsiballZ_getent.py'
Feb 02 11:16:13 compute-0 sudo[112002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:13.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:13 compute-0 python3.9[112004]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 02 11:16:13 compute-0 sudo[112002]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:16:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 02 11:16:13 compute-0 ceph-mon[74676]: osdmap e139: 3 total, 3 up, 3 in
Feb 02 11:16:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 02 11:16:13 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 02 11:16:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:14 compute-0 sudo[112156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzsttzgihysnfhmejtjzolgppxjmtpnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030973.6573966-733-15991422831192/AnsiballZ_group.py'
Feb 02 11:16:14 compute-0 sudo[112156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:14 compute-0 python3.9[112158]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:16:14 compute-0 sudo[112156]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:16:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:14 compute-0 ceph-mon[74676]: pgmap v125: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb 02 11:16:14 compute-0 ceph-mon[74676]: osdmap e140: 3 total, 3 up, 3 in
Feb 02 11:16:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:14 compute-0 sudo[112308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypsdfgqhabzevbikzstyfvmcuimnudiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030974.5257556-760-159361942341772/AnsiballZ_file.py'
Feb 02 11:16:14 compute-0 sudo[112308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:14 compute-0 python3.9[112310]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 02 11:16:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:14 compute-0 sudo[112308]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:15 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Feb 02 11:16:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:15 compute-0 sudo[112462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjhpopxbwzmcxjoafxppeiybildkaza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030975.418376-793-72090872675867/AnsiballZ_dnf.py'
Feb 02 11:16:15 compute-0 sudo[112462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:15.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:15 compute-0 python3.9[112464]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:16 compute-0 ceph-mon[74676]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Feb 02 11:16:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:17] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Feb 02 11:16:17 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:17] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Feb 02 11:16:17 compute-0 ceph-mgr[74969]: [dashboard INFO request] [192.168.122.100:48698] [POST] [200] [0.149s] [4.0B] [b9a049e5-1002-49d4-87d4-ed199946506b] /api/prometheus_receiver
Feb 02 11:16:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:17 compute-0 sudo[112462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Feb 02 11:16:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:17.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:17 compute-0 sudo[112618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtmyxvvutabhskcrtdiyipapjeguwfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030977.5337043-817-84859670036119/AnsiballZ_file.py'
Feb 02 11:16:17 compute-0 sudo[112618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:17 compute-0 python3.9[112620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:16:17 compute-0 sudo[112618]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:18 compute-0 sudo[112770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdrigbqwrvuvliyzaosjwhhcbcwedfqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030978.1617067-841-205441734700232/AnsiballZ_stat.py'
Feb 02 11:16:18 compute-0 sudo[112770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:18 compute-0 python3.9[112772]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:16:18 compute-0 sudo[112770]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:18 compute-0 ceph-mon[74676]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Feb 02 11:16:18 compute-0 sudo[112849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czxoccjahovzwrtfjfbflswbpfflnpya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030978.1617067-841-205441734700232/AnsiballZ_file.py'
Feb 02 11:16:18 compute-0 sudo[112849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:19 compute-0 python3.9[112851]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:16:19 compute-0 sudo[112849]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:19 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c80 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:16:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:16:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Feb 02 11:16:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:19 compute-0 sudo[113002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghiyucpkxyuovbqgajhbbmxddoxzhnio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030979.56657-880-218096970364321/AnsiballZ_stat.py'
Feb 02 11:16:19 compute-0 sudo[113002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:20 compute-0 python3.9[113004]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:16:20 compute-0 sudo[113002]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:20 compute-0 sudo[113080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yefyboikfdzsixjsbjrmorcjrotumsuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030979.56657-880-218096970364321/AnsiballZ_file.py'
Feb 02 11:16:20 compute-0 sudo[113080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:20 compute-0 python3.9[113082]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:16:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:20 compute-0 sudo[113080]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:20 compute-0 ceph-mon[74676]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Feb 02 11:16:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:21 compute-0 sudo[113233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoeanpkwyaupuepzonoehnqtipsxsqbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030980.986713-925-158747068262215/AnsiballZ_dnf.py'
Feb 02 11:16:21 compute-0 sudo[113233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003c80 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:21 compute-0 sudo[113236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:16:21 compute-0 sudo[113236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:21 compute-0 sudo[113236]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:21.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:21 compute-0 python3.9[113235]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 345 B/s rd, 0 op/s; 12 B/s, 1 objects/s recovering
Feb 02 11:16:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:22 compute-0 ceph-mon[74676]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 345 B/s rd, 0 op/s; 12 B/s, 1 objects/s recovering
Feb 02 11:16:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003190 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:22 compute-0 sudo[113233]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:23 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:23.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Feb 02 11:16:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:23 compute-0 python3.9[113414]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:16:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003ca0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:24 compute-0 python3.9[113566]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 02 11:16:24 compute-0 ceph-mon[74676]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Feb 02 11:16:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:25 compute-0 python3.9[113717]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:16:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Feb 02 11:16:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:25.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:26 compute-0 sudo[113868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzaijepvndshsbcapbosurqtgcbebvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030985.910407-1048-132964739468638/AnsiballZ_systemd.py'
Feb 02 11:16:26 compute-0 sudo[113868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:26 compute-0 python3.9[113870]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:16:26 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 02 11:16:26 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 02 11:16:26 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 02 11:16:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:26 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 11:16:26 compute-0 ceph-mon[74676]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Feb 02 11:16:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:16:26.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:16:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:26] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Feb 02 11:16:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:26] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Feb 02 11:16:27 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 11:16:27 compute-0 sudo[113868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:27 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:27.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:28 compute-0 python3.9[114033]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 02 11:16:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003ce0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:28 compute-0 ceph-mon[74676]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:29 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:29.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:16:29
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', '.nfs', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:16:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:16:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:16:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:16:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:31 compute-0 ceph-mon[74676]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:31 compute-0 sudo[114186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exzpnbazsmwgtbhuacvduubceumqnjkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030991.0130732-1219-167667763045192/AnsiballZ_systemd.py'
Feb 02 11:16:31 compute-0 sudo[114186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003d00 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:31.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:31 compute-0 python3.9[114188]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:16:31 compute-0 sudo[114186]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:31.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:31 compute-0 sudo[114341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppzuenjkuqwipovefyxbwbczqiqsllbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030991.7307568-1219-107129150451604/AnsiballZ_systemd.py'
Feb 02 11:16:31 compute-0 sudo[114341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.046605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992046652, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2731, "num_deletes": 251, "total_data_size": 5340760, "memory_usage": 5455056, "flush_reason": "Manual Compaction"}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992084176, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 4970012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8110, "largest_seqno": 10839, "table_properties": {"data_size": 4957044, "index_size": 8374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 30369, "raw_average_key_size": 21, "raw_value_size": 4929532, "raw_average_value_size": 3551, "num_data_blocks": 365, "num_entries": 1388, "num_filter_entries": 1388, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030876, "oldest_key_time": 1770030876, "file_creation_time": 1770030992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 37631 microseconds, and 7032 cpu microseconds.
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.084236) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 4970012 bytes OK
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.084261) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.101088) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.101138) EVENT_LOG_v1 {"time_micros": 1770030992101129, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.101161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5328508, prev total WAL file size 5328508, number of live WAL files 2.
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.102081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(4853KB)], [23(12MB)]
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992102156, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18377589, "oldest_snapshot_seqno": -1}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4069 keys, 14301462 bytes, temperature: kUnknown
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992194559, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14301462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14268913, "index_size": 21303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 103957, "raw_average_key_size": 25, "raw_value_size": 14189000, "raw_average_value_size": 3487, "num_data_blocks": 912, "num_entries": 4069, "num_filter_entries": 4069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770030992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.194951) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14301462 bytes
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.196681) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.6 rd, 154.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.7, 12.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(6.6) write-amplify(2.9) OK, records in: 4599, records dropped: 530 output_compression: NoCompression
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.196708) EVENT_LOG_v1 {"time_micros": 1770030992196695, "job": 8, "event": "compaction_finished", "compaction_time_micros": 92518, "compaction_time_cpu_micros": 22521, "output_level": 6, "num_output_files": 1, "total_output_size": 14301462, "num_input_records": 4599, "num_output_records": 4069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992197297, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770030992199079, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.101967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.199164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.199171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.199173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.199175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:16:32.199177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:16:32 compute-0 python3.9[114343]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:16:32 compute-0 sudo[114341]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:32 compute-0 sshd-session[106614]: Connection closed by 192.168.122.30 port 45898
Feb 02 11:16:32 compute-0 sshd-session[106611]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:16:32 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Feb 02 11:16:32 compute-0 systemd[1]: session-39.scope: Consumed 1min 3.272s CPU time.
Feb 02 11:16:32 compute-0 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Feb 02 11:16:32 compute-0 systemd-logind[793]: Removed session 39.
Feb 02 11:16:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:33 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:33.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:33 compute-0 ceph-mon[74676]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:33.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:34 compute-0 ceph-mon[74676]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003d20 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:35 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:35.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:35.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:36 compute-0 ceph-mon[74676]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4003d40 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:16:36.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:16:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:36] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Feb 02 11:16:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:36] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Feb 02 11:16:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:37 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:37.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:38 compute-0 sshd-session[114376]: Accepted publickey for zuul from 192.168.122.30 port 50400 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:16:38 compute-0 systemd-logind[793]: New session 40 of user zuul.
Feb 02 11:16:38 compute-0 systemd[1]: Started Session 40 of User zuul.
Feb 02 11:16:38 compute-0 sshd-session[114376]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:16:38 compute-0 ceph-mon[74676]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:16:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6604003f50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:39 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001320 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:39 compute-0 python3.9[114532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:16:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:39.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:40 compute-0 sudo[114687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcyvozugydqpbwisccmpthlcipfjtvhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770030999.965884-63-97906941660809/AnsiballZ_getent.py'
Feb 02 11:16:40 compute-0 sudo[114687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:40 compute-0 python3.9[114689]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 02 11:16:40 compute-0 sudo[114687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:40 compute-0 ceph-mon[74676]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:40 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:41 compute-0 sudo[114842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqqcbcwgxgiawgnvfbilocqtobwaqxus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031000.899714-99-43120714602004/AnsiballZ_setup.py'
Feb 02 11:16:41 compute-0 sudo[114842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:41 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001230 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111641 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:16:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:41 compute-0 sudo[114845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:16:41 compute-0 sudo[114845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:16:41 compute-0 sudo[114845]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:41 compute-0 python3.9[114844]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:16:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:41.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:41 compute-0 sudo[114842]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:42 compute-0 sudo[114952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peosblwfbydszvetucjwirfbfomhoaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031000.899714-99-43120714602004/AnsiballZ_dnf.py'
Feb 02 11:16:42 compute-0 sudo[114952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:42 compute-0 python3.9[114954]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 11:16:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:42 compute-0 ceph-mon[74676]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:42 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:43 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:43.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:43.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:43 compute-0 sudo[114952]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:44 compute-0 sudo[115107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enqqtnkpdwzdaiedqumxddvbtblwqufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031003.9475737-141-7497451192202/AnsiballZ_dnf.py'
Feb 02 11:16:44 compute-0 sudo[115107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:44 compute-0 python3.9[115109]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:16:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:44 compute-0 ceph-mon[74676]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:16:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:44 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:45 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:16:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:45.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:45 compute-0 sudo[115107]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:46 compute-0 sudo[115262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqaurfmybanblqxjehnqwmhszlcmtqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031006.040722-165-121157879812489/AnsiballZ_systemd.py'
Feb 02 11:16:46 compute-0 sudo[115262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c004660 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:46 compute-0 ceph-mon[74676]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:16:46 compute-0 python3.9[115264]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:16:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:46 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:46 compute-0 sudo[115262]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:16:46.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:16:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:16:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:16:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:47 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:47.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:16:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:47.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:47 compute-0 python3.9[115418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:16:48 compute-0 sudo[115569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsvvjcujtnffkrfgwdwvmljpbselcxdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031007.95211-219-276910802415346/AnsiballZ_sefcontext.py'
Feb 02 11:16:48 compute-0 sudo[115569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:48 compute-0 python3.9[115571]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 02 11:16:48 compute-0 ceph-mon[74676]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:16:48 compute-0 sudo[115569]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:48 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:49 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:16:49 compute-0 python3.9[115724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:16:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:49.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:49 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:16:50 compute-0 sudo[115881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrnuqigzkzkfxiewoazkhdqzvtzunrnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031010.0047443-273-9920431452521/AnsiballZ_dnf.py'
Feb 02 11:16:50 compute-0 sudo[115881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:50 compute-0 python3.9[115883]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001ff0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:50 compute-0 ceph-mon[74676]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:16:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:50 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4000b60 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:51 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614008dc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Feb 02 11:16:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:51 compute-0 sudo[115881]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:52 compute-0 sudo[116036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjtgtbnnnohazkcfsphwarmknxqzexv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031012.0815427-297-41294330207796/AnsiballZ_command.py'
Feb 02 11:16:52 compute-0 sudo[116036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:52 compute-0 python3.9[116038]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:16:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c30 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:52 compute-0 ceph-mon[74676]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Feb 02 11:16:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:16:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:16:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:52 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001ff0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:53 compute-0 sudo[116036]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:53 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:53.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Feb 02 11:16:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:53.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:53 compute-0 sudo[116325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtdxrcxjkhudvvvuzzyksxkeaizkgfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031013.475713-321-113182320834344/AnsiballZ_file.py'
Feb 02 11:16:53 compute-0 sudo[116325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:54 compute-0 python3.9[116327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 02 11:16:54 compute-0 sudo[116325]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614008dc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:54 compute-0 python3.9[116477]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:16:54 compute-0 ceph-mon[74676]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Feb 02 11:16:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:54 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:55 compute-0 sudo[116630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdipenejrgbgbovebjkidxesjzaymlau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031014.9204109-369-77981745050426/AnsiballZ_dnf.py'
Feb 02 11:16:55 compute-0 sudo[116630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:55 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:55 compute-0 python3.9[116632]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:16:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:16:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:55 : epoch 69808700 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:16:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:56 compute-0 ceph-mon[74676]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:16:56 compute-0 sudo[116630]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:16:56.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:16:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:56 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614008dc0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:16:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:16:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:16:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:57 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618002a70 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:57.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:16:57 compute-0 sudo[116786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblguuecmntcbbdzyryjadbftlhjtfja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031017.3297057-396-41307782277185/AnsiballZ_dnf.py'
Feb 02 11:16:57 compute-0 sudo[116786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:16:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:16:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:16:57 compute-0 python3.9[116788]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:16:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:58 compute-0 ceph-mon[74676]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:16:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:58 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:59 compute-0 sudo[116786]: pam_unix(sudo:session): session closed for user root
Feb 02 11:16:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:16:59 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:16:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:16:59.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:16:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:16:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:16:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:16:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:16:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:16:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:16:59.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:16:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:16:59 compute-0 sudo[116941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqputnaztxgwbuezahhxjdespvqukmli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031019.5878348-432-263124281464410/AnsiballZ_stat.py'
Feb 02 11:16:59 compute-0 sudo[116941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:00 compute-0 python3.9[116943]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:17:00 compute-0 sudo[116941]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:00 compute-0 sudo[117095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaelqasjtcczetpjrnvfnxpifmsfkzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031020.192184-456-207943513758333/AnsiballZ_slurp.py'
Feb 02 11:17:00 compute-0 sudo[117095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618002a70 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:00 compute-0 python3.9[117097]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb 02 11:17:00 compute-0 sudo[117095]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:00 compute-0 ceph-mon[74676]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:17:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:00 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002720 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:01 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614008de0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111701 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:17:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:01.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:01 compute-0 sudo[117123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:17:01 compute-0 sudo[117123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:01 compute-0 sudo[117123]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:17:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:01.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:01 compute-0 sshd-session[114379]: Connection closed by 192.168.122.30 port 50400
Feb 02 11:17:01 compute-0 sshd-session[114376]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:17:01 compute-0 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Feb 02 11:17:01 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Feb 02 11:17:01 compute-0 systemd[1]: session-40.scope: Consumed 16.980s CPU time.
Feb 02 11:17:01 compute-0 systemd-logind[793]: Removed session 40.
Feb 02 11:17:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:02 compute-0 ceph-mon[74676]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:17:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:02 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:03 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002720 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:03.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:03.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:04 compute-0 ceph-mon[74676]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:04 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:05 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:05.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:05.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002720 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:06 compute-0 ceph-mon[74676]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:06.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:17:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:06.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:06 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:07 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003c50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:07.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:07 compute-0 sudo[117157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:17:07 compute-0 sudo[117157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:07 compute-0 sshd-session[117155]: Accepted publickey for zuul from 192.168.122.30 port 34474 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:17:07 compute-0 sudo[117157]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:07 compute-0 systemd-logind[793]: New session 41 of user zuul.
Feb 02 11:17:07 compute-0 systemd[1]: Started Session 41 of User zuul.
Feb 02 11:17:07 compute-0 sshd-session[117155]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:17:07 compute-0 sudo[117183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:17:07 compute-0 sudo[117183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:08 compute-0 podman[117359]: 2026-02-02 11:17:08.284946506 +0000 UTC m=+0.071829265 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:17:08 compute-0 podman[117359]: 2026-02-02 11:17:08.417243222 +0000 UTC m=+0.204126001 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:17:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:08 compute-0 python3.9[117460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:17:08 compute-0 podman[117574]: 2026-02-02 11:17:08.830731331 +0000 UTC m=+0.041493151 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:08 compute-0 podman[117574]: 2026-02-02 11:17:08.839048251 +0000 UTC m=+0.049810071 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:08 compute-0 ceph-mon[74676]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:08 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:09 compute-0 podman[117669]: 2026-02-02 11:17:09.045535204 +0000 UTC m=+0.054600158 container exec 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:17:09 compute-0 podman[117669]: 2026-02-02 11:17:09.083238943 +0000 UTC m=+0.092303877 container exec_died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:17:09 compute-0 podman[117734]: 2026-02-02 11:17:09.280659586 +0000 UTC m=+0.049033941 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:17:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:09 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:09 compute-0 podman[117754]: 2026-02-02 11:17:09.342963157 +0000 UTC m=+0.049496052 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:17:09 compute-0 podman[117734]: 2026-02-02 11:17:09.348025062 +0000 UTC m=+0.116399397 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:17:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:09.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:09 compute-0 podman[117851]: 2026-02-02 11:17:09.523186414 +0000 UTC m=+0.046962025 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, io.openshift.expose-services=, vcs-type=git, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Feb 02 11:17:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:09 compute-0 podman[117851]: 2026-02-02 11:17:09.535168152 +0000 UTC m=+0.058943743 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=keepalived, version=2.2.4)
Feb 02 11:17:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:09 compute-0 podman[117990]: 2026-02-02 11:17:09.721499981 +0000 UTC m=+0.051044284 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:09 compute-0 podman[117990]: 2026-02-02 11:17:09.748107516 +0000 UTC m=+0.077651789 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:09 compute-0 podman[118067]: 2026-02-02 11:17:09.912787381 +0000 UTC m=+0.042174179 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:17:09 compute-0 python3.9[117977]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:17:10 compute-0 podman[118067]: 2026-02-02 11:17:10.104701727 +0000 UTC m=+0.234088535 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:17:10 compute-0 podman[118244]: 2026-02-02 11:17:10.431453738 +0000 UTC m=+0.051970099 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:10 compute-0 sshd-session[117426]: Received disconnect from 111.61.229.78 port 4175:11:  [preauth]
Feb 02 11:17:10 compute-0 sshd-session[117426]: Disconnected from authenticating user root 111.61.229.78 port 4175 [preauth]
Feb 02 11:17:10 compute-0 podman[118244]: 2026-02-02 11:17:10.470179964 +0000 UTC m=+0.090696305 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:17:10 compute-0 sudo[117183]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:17:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:17:10 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:10 compute-0 sudo[118338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:17:10 compute-0 sudo[118338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:10 compute-0 sudo[118338]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003df0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:10 compute-0 sudo[118363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:17:10 compute-0 sudo[118363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:10 compute-0 ceph-mon[74676]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:10 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:10 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:10 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:10 compute-0 python3.9[118462]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:17:11 compute-0 sudo[118363]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:17:11 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:17:11 compute-0 sudo[118518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:17:11 compute-0 sudo[118518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:11 compute-0 sudo[118518]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:11 compute-0 sudo[118543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:17:11 compute-0 sudo[118543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:11 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:11.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:11 compute-0 sshd-session[117201]: Connection closed by 192.168.122.30 port 34474
Feb 02 11:17:11 compute-0 sshd-session[117155]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:17:11 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Feb 02 11:17:11 compute-0 systemd[1]: session-41.scope: Consumed 2.027s CPU time.
Feb 02 11:17:11 compute-0 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Feb 02 11:17:11 compute-0 systemd-logind[793]: Removed session 41.
Feb 02 11:17:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.528427053 +0000 UTC m=+0.030988082 container create 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:17:11 compute-0 systemd[1]: Started libpod-conmon-1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7.scope.
Feb 02 11:17:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.600844623 +0000 UTC m=+0.103405682 container init 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.606890573 +0000 UTC m=+0.109451602 container start 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.609876592 +0000 UTC m=+0.112437621 container attach 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.515535802 +0000 UTC m=+0.018096851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:11 compute-0 youthful_sanderson[118626]: 167 167
Feb 02 11:17:11 compute-0 systemd[1]: libpod-1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7.scope: Deactivated successfully.
Feb 02 11:17:11 compute-0 conmon[118626]: conmon 1164153da2ee1974a7bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7.scope/container/memory.events
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.623119963 +0000 UTC m=+0.125680992 container died 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d0e105f0e44ac5b5ebc0538979b1ad86098a44419e72b99fef858dc09c25fa-merged.mount: Deactivated successfully.
Feb 02 11:17:11 compute-0 podman[118609]: 2026-02-02 11:17:11.664650604 +0000 UTC m=+0.167211633 container remove 1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:17:11 compute-0 systemd[1]: libpod-conmon-1164153da2ee1974a7bf1e238491ced8aa83ec113cf23839c0b389028923d6e7.scope: Deactivated successfully.
Feb 02 11:17:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:11.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:11 compute-0 podman[118652]: 2026-02-02 11:17:11.777337971 +0000 UTC m=+0.039972321 container create 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:11 compute-0 systemd[1]: Started libpod-conmon-6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56.scope.
Feb 02 11:17:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:11 compute-0 podman[118652]: 2026-02-02 11:17:11.844197873 +0000 UTC m=+0.106832253 container init 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:17:11 compute-0 podman[118652]: 2026-02-02 11:17:11.850387277 +0000 UTC m=+0.113021627 container start 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:17:11 compute-0 podman[118652]: 2026-02-02 11:17:11.853964732 +0000 UTC m=+0.116599112 container attach 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:17:11 compute-0 podman[118652]: 2026-02-02 11:17:11.759449417 +0000 UTC m=+0.022083787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:17:11 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:17:12 compute-0 hopeful_galileo[118669]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:17:12 compute-0 hopeful_galileo[118669]: --> All data devices are unavailable
Feb 02 11:17:12 compute-0 systemd[1]: libpod-6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56.scope: Deactivated successfully.
Feb 02 11:17:12 compute-0 conmon[118669]: conmon 6af842a8c94a53ae9a67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56.scope/container/memory.events
Feb 02 11:17:12 compute-0 podman[118652]: 2026-02-02 11:17:12.174178498 +0000 UTC m=+0.436812848 container died 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f0eece0558a85846c20b7d35b318dc0a251ad1cd244ca448ba2a70454fa903-merged.mount: Deactivated successfully.
Feb 02 11:17:12 compute-0 podman[118652]: 2026-02-02 11:17:12.212025071 +0000 UTC m=+0.474659421 container remove 6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_galileo, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:17:12 compute-0 systemd[1]: libpod-conmon-6af842a8c94a53ae9a6761a4da13e2a0f6ef2f24d79265f5e14f9f744e2cfa56.scope: Deactivated successfully.
Feb 02 11:17:12 compute-0 sudo[118543]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:12 compute-0 sudo[118696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:17:12 compute-0 sudo[118696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:12 compute-0 sudo[118696]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:12 compute-0 sudo[118721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:17:12 compute-0 sudo[118721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.735076065 +0000 UTC m=+0.069762761 container create ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:17:12 compute-0 systemd[1]: Started libpod-conmon-ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c.scope.
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.687058352 +0000 UTC m=+0.021745068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.822127392 +0000 UTC m=+0.156814108 container init ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.826769825 +0000 UTC m=+0.161456521 container start ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:17:12 compute-0 confident_galileo[118802]: 167 167
Feb 02 11:17:12 compute-0 systemd[1]: libpod-ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c.scope: Deactivated successfully.
Feb 02 11:17:12 compute-0 conmon[118802]: conmon ec29042017b4344f549f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c.scope/container/memory.events
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.83222763 +0000 UTC m=+0.166914336 container attach ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.833060342 +0000 UTC m=+0.167747038 container died ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef36cc7ebcc77ac6925edde958ae7b4b9d6d51e89c937e29c8fee10cf9a29197-merged.mount: Deactivated successfully.
Feb 02 11:17:12 compute-0 podman[118786]: 2026-02-02 11:17:12.895895677 +0000 UTC m=+0.230582373 container remove ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_galileo, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:12 compute-0 systemd[1]: libpod-conmon-ec29042017b4344f549f657827c4458893bd3e9211c3500951c50b811956442c.scope: Deactivated successfully.
Feb 02 11:17:12 compute-0 ceph-mon[74676]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:12 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003e10 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.033853254 +0000 UTC m=+0.058670706 container create 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:17:13 compute-0 systemd[1]: Started libpod-conmon-5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8.scope.
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:12.998960949 +0000 UTC m=+0.023778421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d32cfc5f68c47430cc1359e58de10c723331fddfdd66556a13ecb77fc472a9e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d32cfc5f68c47430cc1359e58de10c723331fddfdd66556a13ecb77fc472a9e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d32cfc5f68c47430cc1359e58de10c723331fddfdd66556a13ecb77fc472a9e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d32cfc5f68c47430cc1359e58de10c723331fddfdd66556a13ecb77fc472a9e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.115620561 +0000 UTC m=+0.140438043 container init 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.120664885 +0000 UTC m=+0.145482347 container start 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.125368339 +0000 UTC m=+0.150185811 container attach 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:17:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:13 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:13 compute-0 elastic_carson[118845]: {
Feb 02 11:17:13 compute-0 elastic_carson[118845]:     "1": [
Feb 02 11:17:13 compute-0 elastic_carson[118845]:         {
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "devices": [
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "/dev/loop3"
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             ],
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "lv_name": "ceph_lv0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "lv_size": "21470642176",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "name": "ceph_lv0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "tags": {
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.cluster_name": "ceph",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.crush_device_class": "",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.encrypted": "0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.osd_id": "1",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.type": "block",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.vdo": "0",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:                 "ceph.with_tpm": "0"
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             },
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "type": "block",
Feb 02 11:17:13 compute-0 elastic_carson[118845]:             "vg_name": "ceph_vg0"
Feb 02 11:17:13 compute-0 elastic_carson[118845]:         }
Feb 02 11:17:13 compute-0 elastic_carson[118845]:     ]
Feb 02 11:17:13 compute-0 elastic_carson[118845]: }
Feb 02 11:17:13 compute-0 systemd[1]: libpod-5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8.scope: Deactivated successfully.
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.401249291 +0000 UTC m=+0.426066743 container died 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:17:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:13.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d32cfc5f68c47430cc1359e58de10c723331fddfdd66556a13ecb77fc472a9e6-merged.mount: Deactivated successfully.
Feb 02 11:17:13 compute-0 podman[118828]: 2026-02-02 11:17:13.498576261 +0000 UTC m=+0.523393713 container remove 5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:17:13 compute-0 systemd[1]: libpod-conmon-5f29f6e0f48dfcbaae0cd3f922f895f34fb9057a0ac96e8a88941a8cf9810aa8.scope: Deactivated successfully.
Feb 02 11:17:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:13 compute-0 sudo[118721]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:13 compute-0 sudo[118868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:17:13 compute-0 sudo[118868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:13 compute-0 sudo[118868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:13 compute-0 sudo[118893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:17:13 compute-0 sudo[118893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.004554842 +0000 UTC m=+0.040023252 container create f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:17:14 compute-0 systemd[1]: Started libpod-conmon-f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72.scope.
Feb 02 11:17:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.075879292 +0000 UTC m=+0.111347722 container init f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.080566917 +0000 UTC m=+0.116035327 container start f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.083768592 +0000 UTC m=+0.119237032 container attach f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:13.988808845 +0000 UTC m=+0.024277275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:14 compute-0 jovial_elion[118976]: 167 167
Feb 02 11:17:14 compute-0 systemd[1]: libpod-f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72.scope: Deactivated successfully.
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.084952023 +0000 UTC m=+0.120420433 container died f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebcffa9a1d1ecca45c2491a7be95426abfcfa3d2b85963baee9136eaa13ff36e-merged.mount: Deactivated successfully.
Feb 02 11:17:14 compute-0 podman[118959]: 2026-02-02 11:17:14.118368609 +0000 UTC m=+0.153837009 container remove f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:17:14 compute-0 systemd[1]: libpod-conmon-f62e7b6fb49f1231c76c2d09b3e89dc8dce12cc034db3aac93515d1d465f8c72.scope: Deactivated successfully.
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.250065569 +0000 UTC m=+0.036894549 container create 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:17:14 compute-0 systemd[1]: Started libpod-conmon-7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f.scope.
Feb 02 11:17:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea03b2b3f696f9e9f366293a9c5ce09817f8e467be37cd4d8c904ececc375b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea03b2b3f696f9e9f366293a9c5ce09817f8e467be37cd4d8c904ececc375b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea03b2b3f696f9e9f366293a9c5ce09817f8e467be37cd4d8c904ececc375b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea03b2b3f696f9e9f366293a9c5ce09817f8e467be37cd4d8c904ececc375b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.321188914 +0000 UTC m=+0.108017894 container init 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.325788476 +0000 UTC m=+0.112617456 container start 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.330613224 +0000 UTC m=+0.117442204 container attach 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.23575078 +0000 UTC m=+0.022579790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:17:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:14 compute-0 lvm[119094]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:17:14 compute-0 lvm[119094]: VG ceph_vg0 finished
Feb 02 11:17:14 compute-0 sad_chebyshev[119019]: {}
Feb 02 11:17:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:14 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:14 compute-0 systemd[1]: libpod-7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f.scope: Deactivated successfully.
Feb 02 11:17:14 compute-0 podman[119002]: 2026-02-02 11:17:14.990554496 +0000 UTC m=+0.777383496 container died 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:17:14 compute-0 ceph-mon[74676]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bea03b2b3f696f9e9f366293a9c5ce09817f8e467be37cd4d8c904ececc375b7-merged.mount: Deactivated successfully.
Feb 02 11:17:15 compute-0 podman[119002]: 2026-02-02 11:17:15.024081005 +0000 UTC m=+0.810909985 container remove 7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:17:15 compute-0 systemd[1]: libpod-conmon-7ac80f2c3510211cd10f5364f3eacc81d1caeb46a0f5da74a0b1c7af3659d99f.scope: Deactivated successfully.
Feb 02 11:17:15 compute-0 sudo[118893]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:17:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:17:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:15 compute-0 sudo[119108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:17:15 compute-0 sudo[119108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:15 compute-0 sudo[119108]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:15 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003e30 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:17:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:16 compute-0 sshd-session[119134]: Accepted publickey for zuul from 192.168.122.30 port 36472 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:17:16 compute-0 systemd-logind[793]: New session 42 of user zuul.
Feb 02 11:17:16 compute-0 systemd[1]: Started Session 42 of User zuul.
Feb 02 11:17:16 compute-0 sshd-session[119134]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:17:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:16.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:16 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:17 compute-0 ceph-mon[74676]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:17 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:17.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:17.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:17 compute-0 python3.9[119289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:17:18 compute-0 ceph-mon[74676]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f0003e50 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:18 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003780 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:19 compute-0 python3.9[119443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:17:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:19 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:19.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:19 compute-0 sudo[119599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeihlrjeqhpktzboidiflmjdwpowpufa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031039.350143-75-144081116637533/AnsiballZ_setup.py'
Feb 02 11:17:19 compute-0 sudo[119599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:19.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:19 compute-0 python3.9[119601]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:17:20 compute-0 sudo[119599]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:20 compute-0 sudo[119683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfavajlcsvpifskzlcgbfizmxecrokn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031039.350143-75-144081116637533/AnsiballZ_dnf.py'
Feb 02 11:17:20 compute-0 sudo[119683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:20 compute-0 ceph-mon[74676]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:20 compute-0 python3.9[119685]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:17:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:20 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:21 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:21.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:21 compute-0 sudo[119690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:17:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:17:21 compute-0 sudo[119690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:21 compute-0 sudo[119690]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:21.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:22 compute-0 sudo[119683]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:22 compute-0 sudo[119865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjxtedhlhaxcunzloqzyeprdnuwlraiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031042.2740386-111-93522928553370/AnsiballZ_setup.py'
Feb 02 11:17:22 compute-0 sudo[119865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:22 compute-0 ceph-mon[74676]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:17:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:22 compute-0 python3.9[119867]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:17:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:22 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec001230 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:23 compute-0 sudo[119865]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:23 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0015b0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:23.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:23 compute-0 sudo[120062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfurfvoaxsbiknrwejdmxxbvnjwgbpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031043.3300195-144-144025131521561/AnsiballZ_file.py'
Feb 02 11:17:23 compute-0 sudo[120062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:23 compute-0 python3.9[120064]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:17:23 compute-0 sudo[120062]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:24 compute-0 sudo[120214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfsqmqdccwxpcdivtcevhxwjvoypbear ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031044.1253574-168-167153347147189/AnsiballZ_command.py'
Feb 02 11:17:24 compute-0 sudo[120214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:24 compute-0 ceph-mon[74676]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:24 compute-0 python3.9[120216]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:17:24 compute-0 sudo[120214]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:24 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:25 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:25 compute-0 sudo[120381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctdvxpbvegoliwyqtlijmrorjpayeuid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031045.0125299-192-169282184157534/AnsiballZ_stat.py'
Feb 02 11:17:25 compute-0 sudo[120381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:25 compute-0 python3.9[120383]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:17:25 compute-0 sudo[120381]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:25.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:25 compute-0 sudo[120460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxnpchxtwnolentamqjmjglsjpjfenu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031045.0125299-192-169282184157534/AnsiballZ_file.py'
Feb 02 11:17:25 compute-0 sudo[120460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:25 compute-0 python3.9[120462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:17:26 compute-0 sudo[120460]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:26 compute-0 sudo[120612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsndrxatftdyaenfynuxsesvukdsypji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031046.1621757-228-277440652100776/AnsiballZ_stat.py'
Feb 02 11:17:26 compute-0 sudo[120612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:26 compute-0 python3.9[120614]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:17:26 compute-0 sudo[120612]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0024e0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:26 compute-0 ceph-mon[74676]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:26 compute-0 sudo[120690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfsguygpeslxxybmmcwpvgptlsihdgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031046.1621757-228-277440652100776/AnsiballZ_file.py'
Feb 02 11:17:26 compute-0 sudo[120690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:26.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:26 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:17:26 compute-0 python3.9[120693]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:17:27 compute-0 sudo[120690]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:27 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:27.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:27 compute-0 sudo[120844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzxfucmiqikqqyufajakpvpxcrziqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031047.5292895-267-239831944569088/AnsiballZ_ini_file.py'
Feb 02 11:17:27 compute-0 sudo[120844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:28 compute-0 python3.9[120846]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:17:28 compute-0 sudo[120844]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:28 compute-0 sudo[120996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfvddsluyttfkcecmskiszaihbssfrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031048.4240108-267-193371834008493/AnsiballZ_ini_file.py'
Feb 02 11:17:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:28 compute-0 sudo[120996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:28 compute-0 ceph-mon[74676]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:28 compute-0 python3.9[120998]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:17:28 compute-0 sudo[120996]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:28 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0024e0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:29 compute-0 sudo[121149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbuavkvayazcaqnyttaketyzqwjjmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031048.961872-267-38681240817436/AnsiballZ_ini_file.py'
Feb 02 11:17:29 compute-0 sudo[121149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:29 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:29 compute-0 python3.9[121151]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:17:29 compute-0 sudo[121149]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:17:29
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'vms', '.nfs', 'backups', 'cephfs.cephfs.meta', '.mgr']
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:17:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:17:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:17:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:17:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:29 compute-0 sudo[121302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwobihhexipzejncdtebduxnuqeyyolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031049.5529637-267-203552837365972/AnsiballZ_ini_file.py'
Feb 02 11:17:29 compute-0 sudo[121302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:29 compute-0 python3.9[121304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:17:30 compute-0 sudo[121302]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:30 compute-0 sudo[121454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshautynrobhbfyormpvotgxabuoxqdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031050.2466564-360-55700526936932/AnsiballZ_dnf.py'
Feb 02 11:17:30 compute-0 sudo[121454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:30 compute-0 python3.9[121456]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:17:30 compute-0 ceph-mon[74676]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:30 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:31 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0024e0 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:17:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:32 compute-0 sudo[121454]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:32 compute-0 sudo[121610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzjofsitpquujqearubjwyakmggtcsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031052.6133244-393-273885991924748/AnsiballZ_setup.py'
Feb 02 11:17:32 compute-0 sudo[121610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:32 compute-0 ceph-mon[74676]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:17:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:32 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:33 compute-0 python3.9[121612]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:17:33 compute-0 sudo[121610]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:33 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:33 compute-0 sudo[121765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hixuxorvggwacmmwsshcjrgheuavalgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031053.4077814-417-134080758142294/AnsiballZ_stat.py'
Feb 02 11:17:33 compute-0 sudo[121765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:33 compute-0 python3.9[121767]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:17:33 compute-0 sudo[121765]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:34 compute-0 sudo[121917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcncwobyarbfhocjlavzrosywiszbzmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031054.1559634-444-107365871709557/AnsiballZ_stat.py'
Feb 02 11:17:34 compute-0 sudo[121917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:34 compute-0 python3.9[121919]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:17:34 compute-0 sudo[121917]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003950 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:34 compute-0 ceph-mon[74676]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:34 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:35 compute-0 sudo[122070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnhswndvndldmlkdeqkfcbawbwlcwfxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031054.8918114-474-163711841069910/AnsiballZ_command.py'
Feb 02 11:17:35 compute-0 sudo[122070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:35 compute-0 python3.9[122072]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:17:35 compute-0 sudo[122070]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:35 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:36 compute-0 sudo[122224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxyxmxfbamevyidwghuooyowymyzewur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031055.8086977-504-26659858685862/AnsiballZ_service_facts.py'
Feb 02 11:17:36 compute-0 sudo[122224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:36 compute-0 python3.9[122226]: ansible-service_facts Invoked
Feb 02 11:17:36 compute-0 network[122243]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:17:36 compute-0 network[122244]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:17:36 compute-0 network[122245]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:17:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65ec002000 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:36.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:17:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:36.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:36] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Feb 02 11:17:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:36] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Feb 02 11:17:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:36 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003950 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:37 compute-0 ceph-mon[74676]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:37 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003820 fd 40 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:17:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111738 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:17:38 compute-0 kernel: ganesha.nfsd[114432]: segfault at 50 ip 00007f66a036e32e sp 00007f65fb7fd210 error 4 in libntirpc.so.5.8[7f66a0353000+2c000] likely on CPU 2 (core 0, socket 2)
Feb 02 11:17:38 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:17:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[99151]: 02/02/2026 11:17:38 : epoch 69808700 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f661400a930 fd 40 proxy ignored for local
Feb 02 11:17:38 compute-0 systemd[1]: Started Process Core Dump (PID 122358/UID 0).
Feb 02 11:17:38 compute-0 sudo[122224]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:39 compute-0 ceph-mon[74676]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:39.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:39 compute-0 systemd-coredump[122359]: Process 99157 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 67:
                                                    #0  0x00007f66a036e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:17:39 compute-0 systemd[1]: systemd-coredump@1-122358-0.service: Deactivated successfully.
Feb 02 11:17:39 compute-0 systemd[1]: systemd-coredump@1-122358-0.service: Consumed 1.027s CPU time.
Feb 02 11:17:39 compute-0 podman[122442]: 2026-02-02 11:17:39.97898515 +0000 UTC m=+0.034181277 container died 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ab7a27578d1e241f791e54fbd23b877b9eb049f45e0e7ea8c64c5c05a4e324-merged.mount: Deactivated successfully.
Feb 02 11:17:40 compute-0 podman[122442]: 2026-02-02 11:17:40.107721062 +0000 UTC m=+0.162917169 container remove 7b6545872457eb5e64cd3c5d7e18484bacd55b7562a00fdb7dca63f234d74b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:17:40 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:17:40 compute-0 sudo[122581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnyizfddxrrbxvwqyvyyvlcmxzwlbfyj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1770031059.872207-549-238378592338405/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1770031059.872207-549-238378592338405/args'
Feb 02 11:17:40 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:17:40 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.733s CPU time.
Feb 02 11:17:40 compute-0 sudo[122581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:40 compute-0 sudo[122581]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:40 compute-0 sudo[122750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emrcwdnsoolaznpsjjtmljfhjhscxasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031060.6805508-582-66467496405258/AnsiballZ_dnf.py'
Feb 02 11:17:40 compute-0 sudo[122750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:41 compute-0 ceph-mon[74676]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:17:41 compute-0 python3.9[122752]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:17:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:17:41 compute-0 sudo[122755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:17:41 compute-0 sudo[122755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:17:41 compute-0 sudo[122755]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:41.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:42 compute-0 sudo[122750]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:43 compute-0 ceph-mon[74676]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:17:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:17:43 compute-0 sudo[122931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luszcmxtzdgcklfnkzhysfvgicfpcysb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031063.1202047-621-110518876401291/AnsiballZ_package_facts.py'
Feb 02 11:17:43 compute-0 sudo[122931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:43.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:44 compute-0 python3.9[122933]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 02 11:17:44 compute-0 sudo[122931]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:17:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:44 compute-0 ceph-mon[74676]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:17:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111744 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:17:45 compute-0 sudo[123084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykozbwqxtledkhctulaojqlvslgkxrgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031065.135854-651-218882550423950/AnsiballZ_stat.py'
Feb 02 11:17:45 compute-0 sudo[123084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:45 compute-0 python3.9[123086]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:17:45 compute-0 sudo[123084]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:45.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:45 compute-0 sudo[123163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppsjyluogxryyczslvzfgmiyxammkdse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031065.135854-651-218882550423950/AnsiballZ_file.py'
Feb 02 11:17:45 compute-0 sudo[123163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:46 compute-0 python3.9[123165]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:17:46 compute-0 sudo[123163]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:46 compute-0 sudo[123315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjdnuxubfmnycfzeesjndlkxmlwetil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031066.3504066-687-104991980231519/AnsiballZ_stat.py'
Feb 02 11:17:46 compute-0 sudo[123315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:46 compute-0 python3.9[123317]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:17:46 compute-0 sudo[123315]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:46.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:17:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:46.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:46] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:17:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:46] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:17:47 compute-0 ceph-mon[74676]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:47 compute-0 sudo[123394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qflinhwpimvqjatmbqnwtdipgkgvwegl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031066.3504066-687-104991980231519/AnsiballZ_file.py'
Feb 02 11:17:47 compute-0 sudo[123394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:47 compute-0 python3.9[123396]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:17:47 compute-0 sudo[123394]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:48 compute-0 sudo[123548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgiwlojtjzbvdpmekstmysekpwawdtaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031068.3902147-741-26881293988230/AnsiballZ_lineinfile.py'
Feb 02 11:17:48 compute-0 sudo[123548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:49 compute-0 python3.9[123550]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:17:49 compute-0 sudo[123548]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:49 compute-0 ceph-mon[74676]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:50 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 2.
Feb 02 11:17:50 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:17:50 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.733s CPU time.
Feb 02 11:17:50 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:17:50 compute-0 ceph-mon[74676]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:17:50 compute-0 sudo[123730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmaggjnyrehocneketzhqhanrxobwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031070.2745378-786-272376625234181/AnsiballZ_setup.py'
Feb 02 11:17:50 compute-0 sudo[123730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:50 compute-0 podman[123748]: 2026-02-02 11:17:50.651765139 +0000 UTC m=+0.026619446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:17:50 compute-0 python3.9[123736]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:17:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:51 compute-0 podman[123748]: 2026-02-02 11:17:51.045250228 +0000 UTC m=+0.420104515 container create 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:17:51 compute-0 sudo[123730]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bf822c54781dd0e3bb6315d8286cdc82afcffbf91cc2ecd26d83a002d6063/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bf822c54781dd0e3bb6315d8286cdc82afcffbf91cc2ecd26d83a002d6063/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bf822c54781dd0e3bb6315d8286cdc82afcffbf91cc2ecd26d83a002d6063/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bf822c54781dd0e3bb6315d8286cdc82afcffbf91cc2ecd26d83a002d6063/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:17:51 compute-0 podman[123748]: 2026-02-02 11:17:51.322289951 +0000 UTC m=+0.697144258 container init 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:17:51 compute-0 podman[123748]: 2026-02-02 11:17:51.32601208 +0000 UTC m=+0.700866377 container start 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:17:51 compute-0 bash[123748]: 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d
Feb 02 11:17:51 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:17:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:17:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:51.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:51 compute-0 sudo[123888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgnccnxjpazfmfjjeadsbxukaouajbby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031070.2745378-786-272376625234181/AnsiballZ_systemd.py'
Feb 02 11:17:51 compute-0 sudo[123888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:17:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:51.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:51 compute-0 python3.9[123890]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:17:52 compute-0 sudo[123888]: pam_unix(sudo:session): session closed for user root
Feb 02 11:17:52 compute-0 ceph-mon[74676]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:53 compute-0 sshd-session[119138]: Connection closed by 192.168.122.30 port 36472
Feb 02 11:17:53 compute-0 sshd-session[119134]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:17:53 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Feb 02 11:17:53 compute-0 systemd[1]: session-42.scope: Consumed 21.636s CPU time.
Feb 02 11:17:53 compute-0 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Feb 02 11:17:53 compute-0 systemd-logind[793]: Removed session 42.
Feb 02 11:17:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:53.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:54 compute-0 ceph-mon[74676]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:17:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:17:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:55.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:17:56 compute-0 ceph-mon[74676]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:17:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:17:56.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:17:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:56] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:17:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:17:56] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:17:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:17:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:17:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:17:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:57.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:17:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:17:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:57.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:58 compute-0 ceph-mon[74676]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:17:59 compute-0 sshd-session[123924]: Accepted publickey for zuul from 192.168.122.30 port 56242 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:17:59 compute-0 systemd-logind[793]: New session 43 of user zuul.
Feb 02 11:17:59 compute-0 systemd[1]: Started Session 43 of User zuul.
Feb 02 11:17:59 compute-0 sshd-session[123924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:17:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000027s ======
Feb 02 11:17:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:17:59.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:17:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:17:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:17:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:17:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:17:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:17:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000026s ======
Feb 02 11:17:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:17:59.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb 02 11:17:59 compute-0 sudo[124078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhbgyzsspofxmetpvbmvtqeyzisjhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031079.4620738-21-196125652859991/AnsiballZ_file.py'
Feb 02 11:17:59 compute-0 sudo[124078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111800 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:18:00 compute-0 python3.9[124080]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:00 compute-0 sudo[124078]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:00 compute-0 ceph-mon[74676]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:18:00 compute-0 sudo[124230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owtfmzobrajarclydhxkxmrzzlryewum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031080.2872934-57-144359187052267/AnsiballZ_stat.py'
Feb 02 11:18:00 compute-0 sudo[124230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:00 compute-0 python3.9[124232]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:00 compute-0 sudo[124230]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:01 compute-0 sudo[124309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihhhaosncnvqpyzdgxvyqwjgjuuyvewx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031080.2872934-57-144359187052267/AnsiballZ_file.py'
Feb 02 11:18:01 compute-0 sudo[124309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:01 compute-0 python3.9[124311]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:01 compute-0 sudo[124309]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Feb 02 11:18:01 compute-0 sudo[124337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:18:01 compute-0 sudo[124337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:01 compute-0 sudo[124337]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:01 compute-0 sshd-session[123927]: Connection closed by 192.168.122.30 port 56242
Feb 02 11:18:01 compute-0 sshd-session[123924]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:18:01 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Feb 02 11:18:01 compute-0 systemd[1]: session-43.scope: Consumed 1.363s CPU time.
Feb 02 11:18:01 compute-0 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Feb 02 11:18:01 compute-0 systemd-logind[793]: Removed session 43.
Feb 02 11:18:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:02 compute-0 ceph-mon[74676]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000a:nfs.cephfs.2: -2
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:18:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:03.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:18:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:18:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:18:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:04 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:04 compute-0 ceph-mon[74676]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:18:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:18:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:06 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111806 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:18:06 compute-0 ceph-mon[74676]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:18:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:06.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:18:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:06.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:06] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Feb 02 11:18:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:06] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Feb 02 11:18:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:08 compute-0 sshd-session[124383]: Accepted publickey for zuul from 192.168.122.30 port 54640 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:18:08 compute-0 systemd-logind[793]: New session 44 of user zuul.
Feb 02 11:18:08 compute-0 systemd[1]: Started Session 44 of User zuul.
Feb 02 11:18:08 compute-0 sshd-session[124383]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:18:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4001910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:08 compute-0 ceph-mon[74676]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:09 compute-0 python3.9[124537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:18:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c80012e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:09 compute-0 sudo[124692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkgdzyedatofuracmgivbaocxsycdgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031089.593976-54-103641342762783/AnsiballZ_file.py'
Feb 02 11:18:09 compute-0 sudo[124692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:10 compute-0 python3.9[124694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:10 compute-0 sudo[124692]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:10 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4001910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:10 compute-0 sudo[124867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kofmfcqnqfzvkcqwflyreywphxaeojta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031090.3470597-78-252458491331963/AnsiballZ_stat.py'
Feb 02 11:18:10 compute-0 sudo[124867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:10 compute-0 ceph-mon[74676]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:10 compute-0 python3.9[124869]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:10 compute-0 sudo[124867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:11 compute-0 sudo[124946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcmxjmrtzabzostpixksjjdoetxtmfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031090.3470597-78-252458491331963/AnsiballZ_file.py'
Feb 02 11:18:11 compute-0 sudo[124946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:11 compute-0 python3.9[124948]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.6ujcd_o1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:11 compute-0 sudo[124946]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:12 compute-0 sudo[125099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqiiparwyszqgsqyrdhsocmsichnvpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031091.8291252-138-162297949243602/AnsiballZ_stat.py'
Feb 02 11:18:12 compute-0 sudo[125099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:12 compute-0 python3.9[125101]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:12 compute-0 sudo[125099]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:12 compute-0 sudo[125177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qagenbgxkrmnqevgicthmhwuqsbqnall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031091.8291252-138-162297949243602/AnsiballZ_file.py'
Feb 02 11:18:12 compute-0 sudo[125177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:12 compute-0 python3.9[125179]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.f60kwb3r recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:12 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8001e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:12 compute-0 sudo[125177]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:12 compute-0 ceph-mon[74676]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:18:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4001910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:13 compute-0 sudo[125330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpsazerahmrklslplwiugembyhbgtkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031092.934505-177-131832497232605/AnsiballZ_file.py'
Feb 02 11:18:13 compute-0 sudo[125330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:13 compute-0 python3.9[125332]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:18:13 compute-0 sudo[125330]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:13 compute-0 sudo[125483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcchsznxswmvpdezbcugadhfzkxlbafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031093.5450757-201-260822598997717/AnsiballZ_stat.py'
Feb 02 11:18:13 compute-0 sudo[125483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:13 compute-0 python3.9[125485]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:13 compute-0 sudo[125483]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:14 compute-0 sudo[125561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feiwwbpwarhuikbmvskktwdzbszlajkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031093.5450757-201-260822598997717/AnsiballZ_file.py'
Feb 02 11:18:14 compute-0 sudo[125561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:14 compute-0 python3.9[125563]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:18:14 compute-0 sudo[125561]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:18:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:14 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:14 compute-0 sudo[125713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzvpdrkezfelqelkwenzoavhqdhiemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031094.4789846-201-77253246513441/AnsiballZ_stat.py'
Feb 02 11:18:14 compute-0 sudo[125713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:14 compute-0 ceph-mon[74676]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:14 compute-0 python3.9[125715]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:14 compute-0 sudo[125713]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:15 compute-0 sudo[125792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyybhvgpfnetrfuwklhtrltsgnfyfgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031094.4789846-201-77253246513441/AnsiballZ_file.py'
Feb 02 11:18:15 compute-0 sudo[125792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:15 compute-0 python3.9[125794]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:18:15 compute-0 sudo[125795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:18:15 compute-0 sudo[125795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:15 compute-0 sudo[125795]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4001910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:15 compute-0 sudo[125792]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:15 compute-0 sudo[125820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:18:15 compute-0 sudo[125820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:15 compute-0 sudo[126011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhnrrlibokynfkuatmgxolqfgftsbxgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031095.532939-270-146983133170940/AnsiballZ_file.py'
Feb 02 11:18:15 compute-0 sudo[126011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:15 compute-0 sudo[125820]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:15 compute-0 python3.9[126013]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:18:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:18:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:18:15 compute-0 sudo[126011]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:16 compute-0 sudo[126028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:18:16 compute-0 sudo[126028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:16 compute-0 sudo[126028]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:16 compute-0 sudo[126077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:18:16 compute-0 sudo[126077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:16 compute-0 sudo[126263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddefsobtelqrdnnwjqgctowwkmwlorg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031096.131486-294-141993216735857/AnsiballZ_stat.py'
Feb 02 11:18:16 compute-0 sudo[126263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.431393551 +0000 UTC m=+0.044169006 container create 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:18:16 compute-0 systemd[93326]: Created slice User Background Tasks Slice.
Feb 02 11:18:16 compute-0 systemd[1]: Started libpod-conmon-37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba.scope.
Feb 02 11:18:16 compute-0 systemd[93326]: Starting Cleanup of User's Temporary Files and Directories...
Feb 02 11:18:16 compute-0 systemd[93326]: Finished Cleanup of User's Temporary Files and Directories.
Feb 02 11:18:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.410709588 +0000 UTC m=+0.023484983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.510606261 +0000 UTC m=+0.123381666 container init 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.517152745 +0000 UTC m=+0.129928120 container start 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.52158593 +0000 UTC m=+0.134361325 container attach 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:18:16 compute-0 charming_shamir[126291]: 167 167
Feb 02 11:18:16 compute-0 systemd[1]: libpod-37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba.scope: Deactivated successfully.
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.525233723 +0000 UTC m=+0.138009088 container died 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-45d41b2bc4a89c7f10eb773c580657584db336f94c7f1571bfabe9c3e57a5702-merged.mount: Deactivated successfully.
Feb 02 11:18:16 compute-0 python3.9[126271]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:16 compute-0 podman[126273]: 2026-02-02 11:18:16.570468907 +0000 UTC m=+0.183244282 container remove 37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:18:16 compute-0 systemd[1]: libpod-conmon-37a134384d63c3d1134218906acc60df6932c6576af0d1df27e6c7afe81b0dba.scope: Deactivated successfully.
Feb 02 11:18:16 compute-0 sudo[126263]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:16 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:16 compute-0 podman[126337]: 2026-02-02 11:18:16.676279288 +0000 UTC m=+0.019276484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:16 compute-0 sudo[126403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfxyrsryxfvdgrzqqhxtequhdznkajxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031096.131486-294-141993216735857/AnsiballZ_file.py'
Feb 02 11:18:16 compute-0 sudo[126403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:16 compute-0 podman[126337]: 2026-02-02 11:18:16.808926424 +0000 UTC m=+0.151923610 container create 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:18:16 compute-0 ceph-mon[74676]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:18:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:18:16 compute-0 systemd[1]: Started libpod-conmon-0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f.scope.
Feb 02 11:18:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:16 compute-0 podman[126337]: 2026-02-02 11:18:16.955811422 +0000 UTC m=+0.298808638 container init 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:18:16 compute-0 podman[126337]: 2026-02-02 11:18:16.96391994 +0000 UTC m=+0.306917126 container start 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:18:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:16.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:18:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:16.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:16 compute-0 podman[126337]: 2026-02-02 11:18:16.969789696 +0000 UTC m=+0.312786902 container attach 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:18:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:16] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:18:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:16] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:18:17 compute-0 python3.9[126405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:17 compute-0 sudo[126403]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:17 compute-0 amazing_tesla[126409]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:18:17 compute-0 amazing_tesla[126409]: --> All data devices are unavailable
Feb 02 11:18:17 compute-0 systemd[1]: libpod-0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f.scope: Deactivated successfully.
Feb 02 11:18:17 compute-0 podman[126337]: 2026-02-02 11:18:17.301832689 +0000 UTC m=+0.644829875 container died 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:18:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:17 compute-0 sudo[126584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqflvjsfxzlbpyunjsfmypcrzmuawxwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031097.1744611-330-14562272345776/AnsiballZ_stat.py'
Feb 02 11:18:17 compute-0 sudo[126584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb190514788dd6f3ee65a4ce6df9138b506358649a7ab7ed35cb990c22f3faaa-merged.mount: Deactivated successfully.
Feb 02 11:18:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:17 compute-0 python3.9[126586]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:17 compute-0 podman[126337]: 2026-02-02 11:18:17.627158374 +0000 UTC m=+0.970155570 container remove 0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:18:17 compute-0 systemd[1]: libpod-conmon-0d8483ff4880d8a999949cd8b1e796ef08763df05cf86408618b0c67b0f2504f.scope: Deactivated successfully.
Feb 02 11:18:17 compute-0 sudo[126584]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:17 compute-0 sudo[126077]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:17 compute-0 sudo[126597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:18:17 compute-0 sudo[126597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:17 compute-0 sudo[126597]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:17 compute-0 sudo[126639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:18:17 compute-0 sudo[126639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:17 compute-0 sudo[126714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlyqksngpxodpgrolzrptutukysmkosp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031097.1744611-330-14562272345776/AnsiballZ_file.py'
Feb 02 11:18:17 compute-0 sudo[126714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:18 compute-0 python3.9[126716]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:18 compute-0 sudo[126714]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.139880537 +0000 UTC m=+0.037309972 container create 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:18:18 compute-0 systemd[1]: Started libpod-conmon-47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431.scope.
Feb 02 11:18:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.206176334 +0000 UTC m=+0.103605789 container init 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.211594727 +0000 UTC m=+0.109024162 container start 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.215315212 +0000 UTC m=+0.112744647 container attach 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:18:18 compute-0 serene_galois[126799]: 167 167
Feb 02 11:18:18 compute-0 systemd[1]: libpod-47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431.scope: Deactivated successfully.
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.217610076 +0000 UTC m=+0.115039511 container died 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.124043191 +0000 UTC m=+0.021472646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ca09e8d8b726291505dea0d637a7d7fe8bb0e24380e9612ad51e2d54f0c06e-merged.mount: Deactivated successfully.
Feb 02 11:18:18 compute-0 podman[126764]: 2026-02-02 11:18:18.250817762 +0000 UTC m=+0.148247187 container remove 47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:18:18 compute-0 systemd[1]: libpod-conmon-47c7ea5783c54d3d0e2fc4c29c1cc02400cf6181a25faab0c9d23c5c5b073431.scope: Deactivated successfully.
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.384422655 +0000 UTC m=+0.040853961 container create a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:18:18 compute-0 systemd[1]: Started libpod-conmon-a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce.scope.
Feb 02 11:18:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b032e938d60b80afeb592f0cfc391ecc977d9eddcf08972efad8d242c38d9f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b032e938d60b80afeb592f0cfc391ecc977d9eddcf08972efad8d242c38d9f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b032e938d60b80afeb592f0cfc391ecc977d9eddcf08972efad8d242c38d9f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b032e938d60b80afeb592f0cfc391ecc977d9eddcf08972efad8d242c38d9f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.449832478 +0000 UTC m=+0.106263814 container init a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.455412795 +0000 UTC m=+0.111844101 container start a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.458714588 +0000 UTC m=+0.115145894 container attach a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.36646986 +0000 UTC m=+0.022901166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:18 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:18 compute-0 sweet_hertz[126891]: {
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:     "1": [
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:         {
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "devices": [
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "/dev/loop3"
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             ],
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "lv_name": "ceph_lv0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "lv_size": "21470642176",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "name": "ceph_lv0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "tags": {
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.cluster_name": "ceph",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.crush_device_class": "",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.encrypted": "0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.osd_id": "1",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.type": "block",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.vdo": "0",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:                 "ceph.with_tpm": "0"
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             },
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "type": "block",
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:             "vg_name": "ceph_vg0"
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:         }
Feb 02 11:18:18 compute-0 sweet_hertz[126891]:     ]
Feb 02 11:18:18 compute-0 sweet_hertz[126891]: }
Feb 02 11:18:18 compute-0 systemd[1]: libpod-a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce.scope: Deactivated successfully.
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.7513078 +0000 UTC m=+0.407739116 container died a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b032e938d60b80afeb592f0cfc391ecc977d9eddcf08972efad8d242c38d9f2-merged.mount: Deactivated successfully.
Feb 02 11:18:18 compute-0 podman[126874]: 2026-02-02 11:18:18.79710124 +0000 UTC m=+0.453532546 container remove a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:18:18 compute-0 systemd[1]: libpod-conmon-a828a2d5368a754826755ab3233f78ac1a0c699d626cc0380d434fc347de63ce.scope: Deactivated successfully.
Feb 02 11:18:18 compute-0 sudo[126985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfvjqzqqglneylurdgbymgglqrzzkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031098.2213268-366-85380503568848/AnsiballZ_systemd.py'
Feb 02 11:18:18 compute-0 sudo[126985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:18 compute-0 sudo[126639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:18 compute-0 sudo[126989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:18:18 compute-0 sudo[126989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:18 compute-0 sudo[126989]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:18 compute-0 ceph-mon[74676]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:18 compute-0 sudo[127014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:18:18 compute-0 sudo[127014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:19 compute-0 python3.9[126988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:18:19 compute-0 systemd[1]: Reloading.
Feb 02 11:18:19 compute-0 systemd-rc-local-generator[127096]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:18:19 compute-0 systemd-sysv-generator[127099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.297557358 +0000 UTC m=+0.041908142 container create fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.280000513 +0000 UTC m=+0.024351317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:19 compute-0 systemd[1]: Started libpod-conmon-fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265.scope.
Feb 02 11:18:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.453399078 +0000 UTC m=+0.197749892 container init fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.461063884 +0000 UTC m=+0.205414668 container start fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.464254744 +0000 UTC m=+0.208605528 container attach fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:18:19 compute-0 gallant_shirley[127128]: 167 167
Feb 02 11:18:19 compute-0 systemd[1]: libpod-fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265.scope: Deactivated successfully.
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.468313808 +0000 UTC m=+0.212664602 container died fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:18:19 compute-0 sudo[126985]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c0be3811e2026659d30c2e2b73abf868230e13271ae9487e0bbd585de2da3e-merged.mount: Deactivated successfully.
Feb 02 11:18:19 compute-0 podman[127113]: 2026-02-02 11:18:19.517319689 +0000 UTC m=+0.261670463 container remove fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:18:19 compute-0 systemd[1]: libpod-conmon-fb248afdc5e6a98680d370cd07c357ebf3a7b8b98593bed54e0067b5294e1265.scope: Deactivated successfully.
Feb 02 11:18:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:18:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:18:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:19 compute-0 podman[127182]: 2026-02-02 11:18:19.659340469 +0000 UTC m=+0.054000042 container create a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:18:19 compute-0 systemd[1]: Started libpod-conmon-a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e.scope.
Feb 02 11:18:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b803e0aa9f2ef26c33349b03e950cfc09c6d8d093f79d063beec97152176f7e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b803e0aa9f2ef26c33349b03e950cfc09c6d8d093f79d063beec97152176f7e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b803e0aa9f2ef26c33349b03e950cfc09c6d8d093f79d063beec97152176f7e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b803e0aa9f2ef26c33349b03e950cfc09c6d8d093f79d063beec97152176f7e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:18:19 compute-0 podman[127182]: 2026-02-02 11:18:19.631657479 +0000 UTC m=+0.026317072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:18:19 compute-0 podman[127182]: 2026-02-02 11:18:19.743527471 +0000 UTC m=+0.138187064 container init a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:18:19 compute-0 podman[127182]: 2026-02-02 11:18:19.750210949 +0000 UTC m=+0.144870522 container start a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:18:19 compute-0 podman[127182]: 2026-02-02 11:18:19.782170649 +0000 UTC m=+0.176830222 container attach a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:18:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:19 compute-0 sudo[127328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cszqusrgyuhkvuzwltqioijdjiajxmyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031099.630018-390-194107044258313/AnsiballZ_stat.py'
Feb 02 11:18:19 compute-0 sudo[127328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111820 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:18:20 compute-0 python3.9[127330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:20 compute-0 sudo[127328]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:20 compute-0 sudo[127474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlfswkjvfoafaxxejyrjokgylfqrnyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031099.630018-390-194107044258313/AnsiballZ_file.py'
Feb 02 11:18:20 compute-0 sudo[127474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:20 compute-0 lvm[127478]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:18:20 compute-0 lvm[127478]: VG ceph_vg0 finished
Feb 02 11:18:20 compute-0 serene_cannon[127250]: {}
Feb 02 11:18:20 compute-0 systemd[1]: libpod-a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e.scope: Deactivated successfully.
Feb 02 11:18:20 compute-0 systemd[1]: libpod-a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e.scope: Consumed 1.018s CPU time.
Feb 02 11:18:20 compute-0 podman[127182]: 2026-02-02 11:18:20.517569984 +0000 UTC m=+0.912229557 container died a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:18:20 compute-0 python3.9[127476]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b803e0aa9f2ef26c33349b03e950cfc09c6d8d093f79d063beec97152176f7e0-merged.mount: Deactivated successfully.
Feb 02 11:18:20 compute-0 sudo[127474]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:20 compute-0 podman[127182]: 2026-02-02 11:18:20.568567521 +0000 UTC m=+0.963227094 container remove a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:18:20 compute-0 systemd[1]: libpod-conmon-a1b0473be24ed4e6fbb2a55d1897f66d6556c59fe1a7618e31db5d41613cae4e.scope: Deactivated successfully.
Feb 02 11:18:20 compute-0 sudo[127014]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:18:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:18:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:20 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:20 compute-0 sudo[127519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:18:20 compute-0 sudo[127519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:20 compute-0 sudo[127519]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:20 compute-0 ceph-mon[74676]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:20 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:20 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:18:20 compute-0 sudo[127670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knelamuwqeaefmfzxwlmshnxelvjvtgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031100.6990275-426-254012007682341/AnsiballZ_stat.py'
Feb 02 11:18:20 compute-0 sudo[127670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:21 compute-0 python3.9[127672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:21 compute-0 sudo[127670]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:21 compute-0 sudo[127748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjnhchrwpokexqrzvzgeomlyrefczgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031100.6990275-426-254012007682341/AnsiballZ_file.py'
Feb 02 11:18:21 compute-0 sudo[127748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:21 compute-0 python3.9[127750]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:18:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:18:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:21 compute-0 sudo[127748]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:21 compute-0 sudo[127780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:18:21 compute-0 sudo[127780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:21 compute-0 sudo[127780]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:21.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:21 compute-0 sudo[127926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vchnbujaekxmctktudkshyfepymbwokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031101.7075076-462-132989921336292/AnsiballZ_systemd.py'
Feb 02 11:18:21 compute-0 sudo[127926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:22 compute-0 python3.9[127928]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:18:22 compute-0 systemd[1]: Reloading.
Feb 02 11:18:22 compute-0 systemd-rc-local-generator[127950]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:18:22 compute-0 systemd-sysv-generator[127956]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:18:22 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 11:18:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 11:18:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 11:18:22 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 11:18:22 compute-0 sudo[127926]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:22 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:22 compute-0 ceph-mon[74676]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:23 compute-0 python3.9[128120]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:18:23 compute-0 network[128137]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:18:23 compute-0 network[128138]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:18:23 compute-0 network[128139]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:18:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:23.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:23.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:24 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:24 compute-0 ceph-mon[74676]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:18:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:26 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:26 compute-0 ceph-mon[74676]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:18:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:26] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:18:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:26] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:18:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:27 compute-0 sudo[128403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsygjtljebhcbhunwnguxkzatbpdaplm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031107.1607144-540-117230343984064/AnsiballZ_stat.py'
Feb 02 11:18:27 compute-0 sudo[128403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:18:27 compute-0 python3.9[128405]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:27 compute-0 sudo[128403]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:27 compute-0 sudo[128482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuuvidmgotnrmvohdzhntnhoqtryxiqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031107.1607144-540-117230343984064/AnsiballZ_file.py'
Feb 02 11:18:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:27 compute-0 sudo[128482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:28 compute-0 python3.9[128484]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:28 compute-0 sudo[128482]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:28 compute-0 sudo[128634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzaytrfnshdpfowotsclyltsyzunxwul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031108.2919035-579-179801144990248/AnsiballZ_file.py'
Feb 02 11:18:28 compute-0 sudo[128634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:28 compute-0 python3.9[128636]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:28 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:28 compute-0 sudo[128634]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:29 compute-0 ceph-mon[74676]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:18:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:18:29 compute-0 sudo[128787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycybzuwqydtzfkllbztjfaipogdmwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031109.0119038-603-139821618253978/AnsiballZ_stat.py'
Feb 02 11:18:29 compute-0 sudo[128787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:29 compute-0 python3.9[128789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:29 compute-0 sudo[128787]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:18:29
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'default.rgw.meta', '.nfs', 'volumes', 'default.rgw.log', 'vms', 'cephfs.cephfs.data']
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:18:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:29.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:18:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:18:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:18:29 compute-0 sudo[128866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqmchbrjhedvpzhzxywuvhqmoujtyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031109.0119038-603-139821618253978/AnsiballZ_file.py'
Feb 02 11:18:29 compute-0 sudo[128866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:18:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:18:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:29 compute-0 python3.9[128868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:29 compute-0 sudo[128866]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:30 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:30 compute-0 sudo[129019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sinfoeqsxyqychwslkrnaeyndvkjueik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031110.5458164-648-13685165851438/AnsiballZ_timezone.py'
Feb 02 11:18:30 compute-0 sudo[129019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:31 compute-0 python3.9[129021]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 11:18:31 compute-0 systemd[1]: Starting Time & Date Service...
Feb 02 11:18:31 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:18:31 compute-0 systemd[1]: Started Time & Date Service.
Feb 02 11:18:31 compute-0 ceph-mon[74676]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:18:31 compute-0 sudo[129019]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:18:31 compute-0 sudo[129177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsdnurvlegtzcdgbxsdihhivpzyybzuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031111.517909-675-177702649241048/AnsiballZ_file.py'
Feb 02 11:18:31 compute-0 sudo[129177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:31.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:31 compute-0 python3.9[129179]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:31 compute-0 sudo[129177]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:32 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:18:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:32 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:18:32 compute-0 ceph-mon[74676]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:18:32 compute-0 sudo[129329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzasozgxpsboznppqwmybapizcuqipas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031112.124959-699-215759765620723/AnsiballZ_stat.py'
Feb 02 11:18:32 compute-0 sudo[129329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:32 compute-0 python3.9[129331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:32 compute-0 sudo[129329]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:32 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:32 compute-0 sudo[129407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwhkdplcxfrskcueimwnyznfrksgmjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031112.124959-699-215759765620723/AnsiballZ_file.py'
Feb 02 11:18:32 compute-0 sudo[129407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:32 compute-0 python3.9[129409]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:32 compute-0 sudo[129407]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:33 compute-0 sudo[129560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtlidaatuzeximfmtegojidhkcywpyia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031113.099482-735-85351348761237/AnsiballZ_stat.py'
Feb 02 11:18:33 compute-0 sudo[129560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:33 compute-0 python3.9[129562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:33 compute-0 sudo[129560]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:18:33 compute-0 sudo[129639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdiriuycdbzsepqompdssccurscoexo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031113.099482-735-85351348761237/AnsiballZ_file.py'
Feb 02 11:18:33 compute-0 sudo[129639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:33.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:33 compute-0 python3.9[129641]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yo6ci0ux recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:33 compute-0 sudo[129639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:34 compute-0 sudo[129791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyiqkzwmgwfnjmhalrdixtqzylnuvpfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031114.100289-771-69125205615659/AnsiballZ_stat.py'
Feb 02 11:18:34 compute-0 sudo[129791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:34 compute-0 python3.9[129793]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:34 compute-0 sudo[129791]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:34 compute-0 ceph-mon[74676]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:18:34 compute-0 sudo[129869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyznirzfwegcblwaumweuubfspbkmula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031114.100289-771-69125205615659/AnsiballZ_file.py'
Feb 02 11:18:34 compute-0 sudo[129869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:34 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:34 compute-0 python3.9[129871]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:34 compute-0 sudo[129869]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:18:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:35 compute-0 sudo[130023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpbsbyqinrqlqxmjqhrikmjywvlzmyfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031115.1835353-810-176083747426423/AnsiballZ_command.py'
Feb 02 11:18:35 compute-0 sudo[130023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:35.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:35 compute-0 python3.9[130025]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:18:35 compute-0 sudo[130023]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:35.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:36 compute-0 sudo[130176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aclsulkgdnyowngwzkxviicppqbxwasv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031115.9040508-834-168485156787781/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 11:18:36 compute-0 sudo[130176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:36 compute-0 python3[130178]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 11:18:36 compute-0 sudo[130176]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:36 compute-0 ceph-mon[74676]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:36 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:36 compute-0 sudo[130331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvhpynlcratptiwhckeipljxygsjeuho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031116.6099067-858-23647288401423/AnsiballZ_stat.py'
Feb 02 11:18:36 compute-0 sudo[130331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:36.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:18:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:18:37 compute-0 python3.9[130333]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:37 compute-0 sudo[130331]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:37 compute-0 sudo[130409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvxkceesadxcqsdabgwfakvktgvajygz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031116.6099067-858-23647288401423/AnsiballZ_file.py'
Feb 02 11:18:37 compute-0 sudo[130409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:37 compute-0 python3.9[130411]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:37 compute-0 sudo[130409]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:37.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:37 compute-0 sudo[130562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjkwhzkqfilpjyoxefptpefsrnrpgbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031117.653346-894-32356026188754/AnsiballZ_stat.py'
Feb 02 11:18:37 compute-0 sudo[130562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:38 compute-0 python3.9[130564]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:38 compute-0 sudo[130562]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:38 compute-0 sudo[130687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjlgdztmzqfiekducqtgqndncapmuxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031117.653346-894-32356026188754/AnsiballZ_copy.py'
Feb 02 11:18:38 compute-0 sudo[130687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:38 compute-0 ceph-mon[74676]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:38 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:38 compute-0 python3.9[130689]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031117.653346-894-32356026188754/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:38 compute-0 sudo[130687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:39 compute-0 sudo[130840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswcbuqsurfqtvbwafhsalxggxydervw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031118.949071-939-225108710381646/AnsiballZ_stat.py'
Feb 02 11:18:39 compute-0 sudo[130840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:39 compute-0 python3.9[130842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:39 compute-0 sudo[130840]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000056s ======
Feb 02 11:18:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Feb 02 11:18:39 compute-0 sudo[130919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdcajbhwmxgctwxoqtdekedpvsrmrhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031118.949071-939-225108710381646/AnsiballZ_file.py'
Feb 02 11:18:39 compute-0 sudo[130919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:39 compute-0 python3.9[130921]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:39 compute-0 sudo[130919]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:39.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:40 compute-0 sudo[131071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnvrzwtjgbcbwqrkuhcvyosjctdoosg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031120.013209-975-216963307100130/AnsiballZ_stat.py'
Feb 02 11:18:40 compute-0 sudo[131071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:40 compute-0 python3.9[131073]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:40 compute-0 sudo[131071]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:40 compute-0 sudo[131149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijzhbzujhwvjnwbvyhmrvcfuunppsbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031120.013209-975-216963307100130/AnsiballZ_file.py'
Feb 02 11:18:40 compute-0 sudo[131149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:40 compute-0 ceph-mon[74676]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:40 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:40 compute-0 python3.9[131151]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:40 compute-0 sudo[131149]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:41 compute-0 sudo[131302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrboqaxkqhocnrzrlvoiltysuwshsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031121.1014218-1011-275701676701083/AnsiballZ_stat.py'
Feb 02 11:18:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:41 compute-0 sudo[131302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:41 compute-0 python3.9[131304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:41 compute-0 sudo[131302]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:41 compute-0 sudo[131355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:18:41 compute-0 sudo[131355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:18:41 compute-0 sudo[131355]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:41 compute-0 sudo[131404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfpesynuzrjseqgwslzaicmbbasojtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031121.1014218-1011-275701676701083/AnsiballZ_file.py'
Feb 02 11:18:41 compute-0 sudo[131404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:41.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:41 compute-0 python3.9[131408]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:42 compute-0 sudo[131404]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111842 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:18:42 compute-0 sudo[131558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlfwlrmwdvphfwekwbiewywjjdoaiktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031122.2493408-1050-95590154131394/AnsiballZ_command.py'
Feb 02 11:18:42 compute-0 sudo[131558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:42 compute-0 python3.9[131560]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:18:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:42 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8001e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:42 compute-0 ceph-mon[74676]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:18:42 compute-0 sudo[131558]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:43 compute-0 sudo[131714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lixwjwhuhnyezctiwaprokglgyawabpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031122.9219549-1074-260742279806389/AnsiballZ_blockinfile.py'
Feb 02 11:18:43 compute-0 sudo[131714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:43 compute-0 python3.9[131716]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:43 compute-0 sudo[131714]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:18:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:43.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:44 compute-0 sudo[131867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iegabmujzpwenelyacgbumpiniamkpjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031123.800388-1101-212346249893540/AnsiballZ_file.py'
Feb 02 11:18:44 compute-0 sudo[131867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:44 compute-0 python3.9[131869]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:44 compute-0 sudo[131867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:18:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:44 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:44 compute-0 ceph-mon[74676]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:18:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:44 compute-0 sudo[132020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkpfhgravaxifkfpfmrmywbayizzhkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031124.3936996-1101-129738133538905/AnsiballZ_file.py'
Feb 02 11:18:44 compute-0 sudo[132020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8001e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:45 compute-0 python3.9[132022]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:45 compute-0 sudo[132020]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:18:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:45.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:45 compute-0 sudo[132173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpnapnwdhabrfijklxbacdqcvkpsgfjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031125.374019-1146-151756777753981/AnsiballZ_mount.py'
Feb 02 11:18:45 compute-0 sudo[132173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:45.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:45 compute-0 python3.9[132175]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 11:18:46 compute-0 sudo[132173]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:46 compute-0 sudo[132325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlrdsurannsusjzqhulmzmhtundjgabl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031126.1518817-1146-85652303628715/AnsiballZ_mount.py'
Feb 02 11:18:46 compute-0 sudo[132325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:46 compute-0 python3.9[132327]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 11:18:46 compute-0 sudo[132325]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:46 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:46 compute-0 ceph-mon[74676]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:18:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:46.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:46 compute-0 sshd-session[124386]: Connection closed by 192.168.122.30 port 54640
Feb 02 11:18:46 compute-0 sshd-session[124383]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:18:46 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Feb 02 11:18:46 compute-0 systemd[1]: session-44.scope: Consumed 25.088s CPU time.
Feb 02 11:18:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:46] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Feb 02 11:18:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:46] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Feb 02 11:18:46 compute-0 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Feb 02 11:18:46 compute-0 systemd-logind[793]: Removed session 44.
Feb 02 11:18:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8001e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:47.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:48 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:48 compute-0 ceph-mon[74676]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:49.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:49.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:50 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e80091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:50 compute-0 ceph-mon[74676]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:18:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:18:52 compute-0 sshd-session[132358]: Accepted publickey for zuul from 192.168.122.30 port 54226 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:18:52 compute-0 systemd-logind[793]: New session 45 of user zuul.
Feb 02 11:18:52 compute-0 systemd[1]: Started Session 45 of User zuul.
Feb 02 11:18:52 compute-0 sshd-session[132358]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:18:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:52 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:52 compute-0 ceph-mon[74676]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:18:53 compute-0 sudo[132512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvomsqkdkfcxpewesktgitdwdbvoecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031132.5993738-18-43562101072415/AnsiballZ_tempfile.py'
Feb 02 11:18:53 compute-0 sudo[132512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e80091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:53 compute-0 python3.9[132514]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 02 11:18:53 compute-0 sudo[132512]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:53.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:53 compute-0 sudo[132665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmaetdqzsfrduhvdurphmzxocnrcjow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031133.3665388-54-172218341349573/AnsiballZ_stat.py'
Feb 02 11:18:53 compute-0 sudo[132665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:53.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:53 compute-0 python3.9[132667]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:18:53 compute-0 sudo[132665]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:54 compute-0 sudo[132819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzykbjosueqhbhjaobojnetbdvcevac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031134.1432412-78-185800848306162/AnsiballZ_slurp.py'
Feb 02 11:18:54 compute-0 sudo[132819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:54 compute-0 python3.9[132821]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 02 11:18:54 compute-0 sudo[132819]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:54 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:54 compute-0 ceph-mon[74676]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:55 compute-0 sudo[132972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbaubcwbezslvuzirdcevmcxeuezkpeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031134.8861647-102-158674707897872/AnsiballZ_stat.py'
Feb 02 11:18:55 compute-0 sudo[132972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:55 compute-0 python3.9[132974]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.w1ck3qlf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:18:55 compute-0 sudo[132972]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:18:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:55 compute-0 sudo[133098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvcsebslkqljphqugxvloeebjfrhlaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031134.8861647-102-158674707897872/AnsiballZ_copy.py'
Feb 02 11:18:55 compute-0 sudo[133098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:55.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:55 compute-0 python3.9[133100]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.w1ck3qlf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031134.8861647-102-158674707897872/.source.w1ck3qlf _original_basename=.r5ldf8yi follow=False checksum=d3d363f50914146e531a8960e7242910c38fd524 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:18:55 compute-0 sudo[133098]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111856 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:18:56 compute-0 sudo[133250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfxjgbjsbwuzayjlmwbdddpcvunbkoer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031136.1203358-147-83996019764791/AnsiballZ_setup.py'
Feb 02 11:18:56 compute-0 sudo[133250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:56 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:56 compute-0 ceph-mon[74676]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:18:56 compute-0 python3.9[133252]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:18:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:18:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:18:56 compute-0 sudo[133250]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:56] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Feb 02 11:18:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:18:56] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Feb 02 11:18:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:57 compute-0 sudo[133404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcssfbedrfzenuikodzimpugaixkdquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031137.168323-172-269768236376187/AnsiballZ_blockinfile.py'
Feb 02 11:18:57 compute-0 sudo[133404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:57.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:57 compute-0 python3.9[133406]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk+5ipijEJfZ5WmD+HXTsUWj5YiWN8pWe35AO8ELR/LGUVm0CSFF6BVvCEFopx45Tw1wFTKqE8urw/0eWyAJVJxKtGQWkQkJI/2BRPiC4MsjGSd8kDRIhYcX7vGxghEmjhDOdybW3OquAwYpuBFzJ4YXEvem/0hkeOiLJtaWqivo4dDorFRJ5YlUnsmjnpaUVj10wq2ZJCHVNlBv99T5o5iJ36BE4CgrXRBltlXCrEGsC9R58R1VGtPS4RCuEqXsR8ufyuF6mSllD3AZVbZpOlOqfe2tffpgu0CxGfcAatoL7tmDZdvoIWM5efoyDeHPdnQ6c6MRbnC4tPyUmnIQYMJVedoVuJX66kbyQhjuACgISXtZuIOVxTnacvqvfMxfaMtO2sduK7RIOyGnT2RKuKgob04y5yckh41J6M5ETTAQFdoZ9JF743PQaWzEqLPuHAZy0hBOTm0nb0AdqF3DbVhxmbJNxIccZZzzoiJ0NIRUJgJqFUbQ3dsUFUHx5+vs0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+4ZGWp7pPunMx8hDCys1UmkIeHd7wh2zOj2YREaMmY
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHc0xh0oJhavn08bZVWDGvCk3xdQkfkZDfy91YLoiNcNbbXWnr/ZZCe5hG6OcxwK0MPa/K3qeCkvK8+EkrjtpzM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4oxY9DkZlJW8Zd3S8BqpjvSTZxL/s2VHQtjjR8/n6BHRm4a5xiahUvtPYeRle1OxSKi09NXmOFJmYekmkGepH+psi6i013bF87hachNQjoZ+mxLE6CZfu9g8RNGvzfl7GX74RaQPonoopoQw5K32DztfG0ggbjfQeIXoJyFreB4vVH912pFFtmxf/7OeW+Ghxhr6TuNGAOK4yfn7etQeQrudI1RrDq9XDJWokIIdRU7dUX/5u/LhcIrzBS7jcs1MxvkHxpoGnuDy4hsYsQxzOvtf7aDaJmR1Cf4SACCc5jsTb9yhVDUoBbB8+cbZyEK3ptZnI5rmPpRjaPa7g3DZdtDqVH1iop+xhrn1wyKlkFK6PoOtAonRvoQRp7TeWPE/g35abKE1T327yP9W26gYNJPlbe4gUlqeTqbJYuyAidt0Rbc6r6nRjO04SrPCQkJARMh3ObZXr7IDMT1hpl3qKivNNPACjukJ3jQlnYZXMJayY1/mYayyVL9pPwwDy+dM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAx1kLfZEj7ZTAjGrpHaC9R/HCuuz3C6C6WjmU1a1S4z
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQHaS+ym6+H9N0T11zZHG2jBaPMwaww7ZKtSls7mtu3Q2EZlUO9FG8bMOF46PodL9W0B6Ns5TuUHhqIq0OEY1Q=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCa6RS7lJapFXYhFEOarliHae3kAlRVB3Mpj5d8pLeBFLetAVeazvG4Dlu3MAf9MvsXouwCREHUStzNdGkA3zeRPtMdqi1ElJ9EAGLn/XEYyjk+SNqMZKkJuXapeu3A3gSjmLfpP/WG8DAzPtOMFavpTPjb2s++Cfvacm/+jCuLUZyqP32CsDfPHLh5ah0lVyJzKYHpJoJkUI+YE1rSj48+IoO462hoSS5gjQQtrDDKzOGcwMu0gKwAWovI2M1Zjd+QtMWwg6LOJdLMqmbc/uLtCiG/fM9Fzid6+WlrNL2UuC/QO1KYMt4HY6UgyIkBeRIlHW5PPIL2YKnP0K+spcn64DWIiz8HlYKgImtdGV+9S/oy1UMo9mD909F+rVfe7z4Odiha5/4yr9Wfqaog7405kUrGkmUJ9+m0VCKwp7imgkCh3ZGmVy9TkZb9EnUPQJZsTKBrITyIpagOxZeLQE3BMHVlwTm9Z1Lo1rkpt8NnV7QJmwYiPWu41RVa+hn22M8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAjnc0Ts1vxT+icYNashoW2iYerlkwmRX530JvKQ+eU
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDrQv4R+sch1O+OfrWYu+Tr+kYmEx3wGbhS8SiseFMhEjkBOjnQ5br37LVwQalmEoRLwBCczpNGk/ZHNpKcJLd0=
                                              create=True mode=0644 path=/tmp/ansible.w1ck3qlf state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:57 compute-0 sudo[133404]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:57.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:58 compute-0 sudo[133556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgjllaxmsumcjbnkxkhnvfqrobounjlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031137.9423347-196-51834336355960/AnsiballZ_command.py'
Feb 02 11:18:58 compute-0 sudo[133556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:58 compute-0 python3.9[133558]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.w1ck3qlf' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:18:58 compute-0 sudo[133556]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:58 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:58 compute-0 ceph-mon[74676]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:59 compute-0 sudo[133713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuwmzlgmjqudiusbxqfinhippwbhfzna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031138.6754425-220-83395819723429/AnsiballZ_file.py'
Feb 02 11:18:59 compute-0 sudo[133713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:18:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:59 compute-0 python3.9[133715]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.w1ck3qlf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:18:59 compute-0 sudo[133713]: pam_unix(sudo:session): session closed for user root
Feb 02 11:18:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:18:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:18:59 compute-0 sshd-session[132361]: Connection closed by 192.168.122.30 port 54226
Feb 02 11:18:59 compute-0 sshd-session[132358]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:18:59 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Feb 02 11:18:59 compute-0 systemd[1]: session-45.scope: Consumed 4.412s CPU time.
Feb 02 11:18:59 compute-0 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Feb 02 11:18:59 compute-0 systemd-logind[793]: Removed session 45.
Feb 02 11:18:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:18:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:18:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:18:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:18:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:18:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:18:59 compute-0 sshd-session[133559]: Invalid user grafana from 80.94.92.186 port 46686
Feb 02 11:18:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:18:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:18:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:18:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:18:59 compute-0 sshd-session[133559]: Connection closed by invalid user grafana 80.94.92.186 port 46686 [preauth]
Feb 02 11:18:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:00 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:00 compute-0 ceph-mon[74676]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:01 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 11:19:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:19:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:19:01 compute-0 sudo[133745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:19:01 compute-0 sudo[133745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:01.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:01 compute-0 sudo[133745]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:02 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:02 compute-0 ceph-mon[74676]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:19:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:19:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:03.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:19:04 compute-0 sshd-session[133772]: Accepted publickey for zuul from 192.168.122.30 port 45140 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:19:04 compute-0 systemd-logind[793]: New session 46 of user zuul.
Feb 02 11:19:04 compute-0 systemd[1]: Started Session 46 of User zuul.
Feb 02 11:19:04 compute-0 sshd-session[133772]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:19:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:04 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:04 compute-0 ceph-mon[74676]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:19:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:05 compute-0 python3.9[133926]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:19:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:19:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:19:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:19:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:06 compute-0 sudo[134081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-einunzgdmlxrfafixsikbxtuuoijnlmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031145.8311913-51-27133307932150/AnsiballZ_systemd.py'
Feb 02 11:19:06 compute-0 sudo[134081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:06 compute-0 python3.9[134083]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 11:19:06 compute-0 sudo[134081]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:06 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:06.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:19:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:06.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:19:06 compute-0 ceph-mon[74676]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:06] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:19:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:06] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:19:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:07 compute-0 sudo[134238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqshcppyouybtlzuezuwztjoycgambpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031146.892832-75-15787097238183/AnsiballZ_systemd.py'
Feb 02 11:19:07 compute-0 sudo[134238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:07 compute-0 python3.9[134240]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:19:07 compute-0 sudo[134238]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:07.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:08 compute-0 sudo[134392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkccpxcztjnvomqaeoyvdcvmqwwdjuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031147.7160509-102-48481583742094/AnsiballZ_command.py'
Feb 02 11:19:08 compute-0 sudo[134392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:08 compute-0 python3.9[134394]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:19:08 compute-0 sudo[134392]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:19:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:19:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:08 compute-0 sudo[134546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eptphgqznugnqwqxrgnsoahwordrusth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031148.5475574-126-77023781646382/AnsiballZ_stat.py'
Feb 02 11:19:08 compute-0 sudo[134546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:09 compute-0 ceph-mon[74676]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:09 compute-0 python3.9[134548]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:19:09 compute-0 sudo[134546]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:09 compute-0 sudo[134699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqieubiqqraubeiukizcxpupuhiqudiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031149.3332915-153-190323831620827/AnsiballZ_file.py'
Feb 02 11:19:09 compute-0 sudo[134699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:09 compute-0 python3.9[134701]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:09 compute-0 sudo[134699]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:10 compute-0 sshd-session[133775]: Connection closed by 192.168.122.30 port 45140
Feb 02 11:19:10 compute-0 sshd-session[133772]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:19:10 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Feb 02 11:19:10 compute-0 systemd[1]: session-46.scope: Consumed 3.322s CPU time.
Feb 02 11:19:10 compute-0 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Feb 02 11:19:10 compute-0 systemd-logind[793]: Removed session 46.
Feb 02 11:19:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:10 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:11 compute-0 ceph-mon[74676]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:19:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:19:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:19:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:11.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:12 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:13 compute-0 ceph-mon[74676]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:19:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:19:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:13.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:19:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:14 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:15 compute-0 ceph-mon[74676]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:19:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:19:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:15 compute-0 sshd-session[134732]: Accepted publickey for zuul from 192.168.122.30 port 56654 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:19:15 compute-0 systemd-logind[793]: New session 47 of user zuul.
Feb 02 11:19:15 compute-0 systemd[1]: Started Session 47 of User zuul.
Feb 02 11:19:15 compute-0 sshd-session[134732]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:19:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:15.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:16 compute-0 python3.9[134885]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:19:16 compute-0 sshd-session[71428]: Received disconnect from 38.102.83.234 port 33630:11: disconnected by user
Feb 02 11:19:16 compute-0 sshd-session[71428]: Disconnected from user zuul 38.102.83.234 port 33630
Feb 02 11:19:16 compute-0 sshd-session[71425]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:19:16 compute-0 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Feb 02 11:19:16 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Feb 02 11:19:16 compute-0 systemd[1]: session-18.scope: Consumed 1min 38.100s CPU time.
Feb 02 11:19:16 compute-0 systemd-logind[793]: Removed session 18.
Feb 02 11:19:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:16 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:19:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:19:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:19:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:17 compute-0 ceph-mon[74676]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:19:17 compute-0 sudo[135040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzpsiojkpilinvmbvjumhqaejptsrlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031157.14693-57-183192708125834/AnsiballZ_setup.py'
Feb 02 11:19:17 compute-0 sudo[135040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:17 compute-0 python3.9[135042]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:19:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:18 compute-0 sudo[135040]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/111918 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:19:18 compute-0 sudo[135125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxmrvijdysdobnntpvyauxayzuxkgqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031157.14693-57-183192708125834/AnsiballZ_dnf.py'
Feb 02 11:19:18 compute-0 sudo[135125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:18 compute-0 python3.9[135127]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 11:19:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:18 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:19 compute-0 ceph-mon[74676]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:19.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:20 compute-0 sudo[135125]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:20 compute-0 python3.9[135280]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:19:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:20 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:20 compute-0 sudo[135283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:19:20 compute-0 sudo[135283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:20 compute-0 sudo[135283]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:20 compute-0 sudo[135308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:19:20 compute-0 sudo[135308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:21 compute-0 sudo[135308]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:19:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:19:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:19:21 compute-0 sudo[135410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:19:21 compute-0 sudo[135410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:21 compute-0 sudo[135410]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:21 compute-0 sudo[135465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:19:21 compute-0 sudo[135465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:21 compute-0 sudo[135603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:19:21 compute-0 sudo[135603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:21 compute-0 sudo[135603]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:21 compute-0 podman[135611]: 2026-02-02 11:19:21.929261122 +0000 UTC m=+0.043976006 container create a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:19:21 compute-0 systemd[1]: Started libpod-conmon-a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b.scope.
Feb 02 11:19:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:21 compute-0 podman[135611]: 2026-02-02 11:19:21.989050884 +0000 UTC m=+0.103765798 container init a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:19:21 compute-0 podman[135611]: 2026-02-02 11:19:21.995036699 +0000 UTC m=+0.109751583 container start a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:19:21 compute-0 optimistic_hodgkin[135644]: 167 167
Feb 02 11:19:21 compute-0 systemd[1]: libpod-a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b.scope: Deactivated successfully.
Feb 02 11:19:22 compute-0 conmon[135644]: conmon a6dea00026ee1114465b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b.scope/container/memory.events
Feb 02 11:19:22 compute-0 podman[135611]: 2026-02-02 11:19:22.000300071 +0000 UTC m=+0.115014975 container attach a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:19:22 compute-0 podman[135611]: 2026-02-02 11:19:22.001326973 +0000 UTC m=+0.116041857 container died a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:19:22 compute-0 podman[135611]: 2026-02-02 11:19:21.912088683 +0000 UTC m=+0.026803587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b51d62efa94a3b73194022417cdc5fc35f742be3a6123aa5281a3a20dd300d5d-merged.mount: Deactivated successfully.
Feb 02 11:19:22 compute-0 podman[135611]: 2026-02-02 11:19:22.04601198 +0000 UTC m=+0.160726864 container remove a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:19:22 compute-0 systemd[1]: libpod-conmon-a6dea00026ee1114465b7134c8bfc50b3410496fd2326c32dcf39ba080e5cf2b.scope: Deactivated successfully.
Feb 02 11:19:22 compute-0 python3.9[135599]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:19:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.169607158 +0000 UTC m=+0.038577140 container create ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:19:22 compute-0 systemd[1]: Started libpod-conmon-ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a.scope.
Feb 02 11:19:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.154192473 +0000 UTC m=+0.023162475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.2553553 +0000 UTC m=+0.124325312 container init ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.263410819 +0000 UTC m=+0.132380801 container start ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.266937327 +0000 UTC m=+0.135907339 container attach ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:19:22 compute-0 practical_golick[135735]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:19:22 compute-0 practical_golick[135735]: --> All data devices are unavailable
Feb 02 11:19:22 compute-0 systemd[1]: libpod-ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a.scope: Deactivated successfully.
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.604611493 +0000 UTC m=+0.473581475 container died ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fa73939361c3d354a7e8f7d3b817d75958fe389c675246d7d6782e6b27726a3-merged.mount: Deactivated successfully.
Feb 02 11:19:22 compute-0 podman[135691]: 2026-02-02 11:19:22.652461537 +0000 UTC m=+0.521431519 container remove ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:19:22 compute-0 systemd[1]: libpod-conmon-ca3ecb5c3f55e3d315306579cfe7b46c6f5f0bfcb48ccd53f3eb72d7e17fee6a.scope: Deactivated successfully.
Feb 02 11:19:22 compute-0 sudo[135465]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:22 compute-0 sudo[135862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:19:22 compute-0 sudo[135862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:22 compute-0 sudo[135862]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:22 compute-0 python3.9[135851]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:19:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:22 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:22 compute-0 sudo[135887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:19:22 compute-0 sudo[135887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.153196437 +0000 UTC m=+0.043098859 container create 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:19:23 compute-0 ceph-mon[74676]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:19:23 compute-0 systemd[1]: Started libpod-conmon-9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161.scope.
Feb 02 11:19:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.133306424 +0000 UTC m=+0.023208856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.235221915 +0000 UTC m=+0.125124367 container init 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.242647923 +0000 UTC m=+0.132550355 container start 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.246469851 +0000 UTC m=+0.136372303 container attach 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:19:23 compute-0 brave_haslett[136119]: 167 167
Feb 02 11:19:23 compute-0 systemd[1]: libpod-9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161.scope: Deactivated successfully.
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.248079291 +0000 UTC m=+0.137981743 container died 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ada7919bfd3d7857d540d9a8d52e70aef3da2ba830d84b79adfdcf03b767adc0-merged.mount: Deactivated successfully.
Feb 02 11:19:23 compute-0 podman[136083]: 2026-02-02 11:19:23.282839322 +0000 UTC m=+0.172741764 container remove 9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:19:23 compute-0 systemd[1]: libpod-conmon-9f8f5d974343aebd76b0f97c81d667348377e4cd8d64fddb8d6f3bcb2ebcc161.scope: Deactivated successfully.
Feb 02 11:19:23 compute-0 python3.9[136112]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.411488126 +0000 UTC m=+0.035391161 container create 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:19:23 compute-0 systemd[1]: Started libpod-conmon-9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84.scope.
Feb 02 11:19:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a221e458b555af34d538c9229b28595b91726591c97d282cde839aee046aab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a221e458b555af34d538c9229b28595b91726591c97d282cde839aee046aab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a221e458b555af34d538c9229b28595b91726591c97d282cde839aee046aab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a221e458b555af34d538c9229b28595b91726591c97d282cde839aee046aab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.49178018 +0000 UTC m=+0.115683235 container init 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.395933677 +0000 UTC m=+0.019836732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.496227527 +0000 UTC m=+0.120130562 container start 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.499358504 +0000 UTC m=+0.123261559 container attach 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:19:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:19:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]: {
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:     "1": [
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:         {
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "devices": [
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "/dev/loop3"
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             ],
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "lv_name": "ceph_lv0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "lv_size": "21470642176",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "name": "ceph_lv0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "tags": {
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.cluster_name": "ceph",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.crush_device_class": "",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.encrypted": "0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.osd_id": "1",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.type": "block",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.vdo": "0",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:                 "ceph.with_tpm": "0"
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             },
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "type": "block",
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:             "vg_name": "ceph_vg0"
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:         }
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]:     ]
Feb 02 11:19:23 compute-0 quizzical_goldberg[136184]: }
Feb 02 11:19:23 compute-0 systemd[1]: libpod-9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84.scope: Deactivated successfully.
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.791577958 +0000 UTC m=+0.415481003 container died 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:19:23 compute-0 sshd-session[134735]: Connection closed by 192.168.122.30 port 56654
Feb 02 11:19:23 compute-0 sshd-session[134732]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:19:23 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Feb 02 11:19:23 compute-0 systemd[1]: session-47.scope: Consumed 5.548s CPU time.
Feb 02 11:19:23 compute-0 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Feb 02 11:19:23 compute-0 systemd-logind[793]: Removed session 47.
Feb 02 11:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8a221e458b555af34d538c9229b28595b91726591c97d282cde839aee046aab-merged.mount: Deactivated successfully.
Feb 02 11:19:23 compute-0 podman[136146]: 2026-02-02 11:19:23.851963679 +0000 UTC m=+0.475866714 container remove 9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:19:23 compute-0 systemd[1]: libpod-conmon-9c11e587de7572b694c0eb45ac20306512d7b95a2ac3443546470e3091c66f84.scope: Deactivated successfully.
Feb 02 11:19:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:23 compute-0 sudo[135887]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:23 compute-0 sudo[136208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:19:23 compute-0 sudo[136208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:23 compute-0 sudo[136208]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:23 compute-0 sudo[136233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:19:23 compute-0 sudo[136233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:24 compute-0 ceph-mon[74676]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.34591596 +0000 UTC m=+0.051515028 container create 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:19:24 compute-0 systemd[1]: Started libpod-conmon-706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46.scope.
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.315895575 +0000 UTC m=+0.021494663 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.447019165 +0000 UTC m=+0.152618253 container init 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.454009531 +0000 UTC m=+0.159608589 container start 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:19:24 compute-0 boring_bartik[136315]: 167 167
Feb 02 11:19:24 compute-0 systemd[1]: libpod-706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46.scope: Deactivated successfully.
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.464462873 +0000 UTC m=+0.170061951 container attach 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.465261808 +0000 UTC m=+0.170860866 container died 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4088dd0aa79912ffa2e27beff79a99ba616e59bed03805e8d925a87a990cb539-merged.mount: Deactivated successfully.
Feb 02 11:19:24 compute-0 podman[136298]: 2026-02-02 11:19:24.510881562 +0000 UTC m=+0.216480620 container remove 706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:19:24 compute-0 systemd[1]: libpod-conmon-706a3c2663e814363c60a96a8cfe0d6c30006e6fa4ac43c84e055adec9c11e46.scope: Deactivated successfully.
Feb 02 11:19:24 compute-0 podman[136340]: 2026-02-02 11:19:24.645237693 +0000 UTC m=+0.058237126 container create a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:19:24 compute-0 systemd[1]: Started libpod-conmon-a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1.scope.
Feb 02 11:19:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9032694ed48acceb1d3166cba6c77961ad8028478269ca36ccc5abcd971fb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9032694ed48acceb1d3166cba6c77961ad8028478269ca36ccc5abcd971fb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9032694ed48acceb1d3166cba6c77961ad8028478269ca36ccc5abcd971fb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9032694ed48acceb1d3166cba6c77961ad8028478269ca36ccc5abcd971fb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:19:24 compute-0 podman[136340]: 2026-02-02 11:19:24.611837613 +0000 UTC m=+0.024837096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:19:24 compute-0 podman[136340]: 2026-02-02 11:19:24.714046383 +0000 UTC m=+0.127045836 container init a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:19:24 compute-0 podman[136340]: 2026-02-02 11:19:24.719236683 +0000 UTC m=+0.132236116 container start a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:19:24 compute-0 podman[136340]: 2026-02-02 11:19:24.738135365 +0000 UTC m=+0.151134798 container attach a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:19:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:24 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:25 compute-0 lvm[136431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:19:25 compute-0 lvm[136431]: VG ceph_vg0 finished
Feb 02 11:19:25 compute-0 busy_mendeleev[136356]: {}
Feb 02 11:19:25 compute-0 systemd[1]: libpod-a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1.scope: Deactivated successfully.
Feb 02 11:19:25 compute-0 podman[136340]: 2026-02-02 11:19:25.466547551 +0000 UTC m=+0.879546984 container died a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:19:25 compute-0 systemd[1]: libpod-a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1.scope: Consumed 1.008s CPU time.
Feb 02 11:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9032694ed48acceb1d3166cba6c77961ad8028478269ca36ccc5abcd971fb0-merged.mount: Deactivated successfully.
Feb 02 11:19:25 compute-0 podman[136340]: 2026-02-02 11:19:25.520364719 +0000 UTC m=+0.933364152 container remove a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:19:25 compute-0 systemd[1]: libpod-conmon-a283e212b305aed19144255db38b1faefaaf473e684502b08bea66f12a6067e1.scope: Deactivated successfully.
Feb 02 11:19:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:25 compute-0 sudo[136233]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:19:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:19:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:19:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:25 compute-0 sudo[136444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:19:25 compute-0 sudo[136444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:25 compute-0 sudo[136444]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:25.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:26 compute-0 ceph-mon[74676]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:19:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:19:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:26 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:19:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:19:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:19:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0002a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:28 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:28 compute-0 ceph-mon[74676]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:29 compute-0 sshd-session[136472]: Accepted publickey for zuul from 192.168.122.30 port 54574 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:19:29 compute-0 systemd-logind[793]: New session 48 of user zuul.
Feb 02 11:19:29 compute-0 systemd[1]: Started Session 48 of User zuul.
Feb 02 11:19:29 compute-0 sshd-session[136472]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:19:29
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'vms', 'images']
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:19:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:19:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:19:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:19:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:19:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:19:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:19:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:29.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:19:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:30 compute-0 python3.9[136626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:19:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:30 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:30 compute-0 ceph-mon[74676]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:31 compute-0 sudo[136782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnnnabexcrriwcvnedzdomsckolmssr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031171.1710312-105-142166262113949/AnsiballZ_file.py'
Feb 02 11:19:31 compute-0 sudo[136782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:31.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:31 compute-0 python3.9[136784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:31 compute-0 sudo[136782]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:32 compute-0 sudo[136934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmypbsocdrxqwqnufncplszuxfkeiody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031171.8803284-105-178576599408026/AnsiballZ_file.py'
Feb 02 11:19:32 compute-0 sudo[136934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:32 compute-0 python3.9[136936]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:32 compute-0 sudo[136934]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:32 compute-0 sudo[137086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebuylfwfifettruvangnpxuilakcctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031172.4561846-153-93164383656190/AnsiballZ_stat.py'
Feb 02 11:19:32 compute-0 sudo[137086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:32 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:32 compute-0 python3.9[137088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:32 compute-0 sudo[137086]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:33 compute-0 ceph-mon[74676]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:33 compute-0 sudo[137210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnsiifshkzjosgpqdusivpdcpghwjauo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031172.4561846-153-93164383656190/AnsiballZ_copy.py'
Feb 02 11:19:33 compute-0 sudo[137210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:33 compute-0 python3.9[137212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031172.4561846-153-93164383656190/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=72bd09b540ad0f666a279ceeeccf35d0bb0b321b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:33 compute-0 sudo[137210]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:33 compute-0 sudo[137363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeikqqowrheipsfntqujscnjeetovpho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031173.715561-153-31955537212419/AnsiballZ_stat.py'
Feb 02 11:19:33 compute-0 sudo[137363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:34 compute-0 python3.9[137365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:34 compute-0 sudo[137363]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:34 compute-0 sudo[137486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsdvflblejljwhyjghpymbiiebkdzueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031173.715561-153-31955537212419/AnsiballZ_copy.py'
Feb 02 11:19:34 compute-0 sudo[137486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:34 compute-0 python3.9[137488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031173.715561-153-31955537212419/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e9f230bf15cd95a83bcfaa32a289ee3aaf972cc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:34 compute-0 sudo[137486]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:34 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:34 compute-0 sudo[137639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baaiyddjzyfawmrwoxcimkvwpjzjweba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031174.758131-153-118448407568652/AnsiballZ_stat.py'
Feb 02 11:19:34 compute-0 sudo[137639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:35 compute-0 ceph-mon[74676]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:35 compute-0 python3.9[137641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:35 compute-0 sudo[137639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:35 compute-0 sudo[137762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izdfgsonsttlkrchfpudmhlyauevxvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031174.758131-153-118448407568652/AnsiballZ_copy.py'
Feb 02 11:19:35 compute-0 sudo[137762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:35 compute-0 python3.9[137764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031174.758131-153-118448407568652/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c1f4929be9cef6aa0b9f31d65a9e6948ea4f0cc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:35 compute-0 sudo[137762]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:36 compute-0 sudo[137915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzghwclqahcnvndimlagibppurslobrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031175.798808-283-96043985314611/AnsiballZ_file.py'
Feb 02 11:19:36 compute-0 sudo[137915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:36 compute-0 python3.9[137917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:36 compute-0 sudo[137915]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:36 compute-0 sudo[138067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdwxtnckgnoimberjbrlsbyumytdnttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031176.3336349-283-41583470590168/AnsiballZ_file.py'
Feb 02 11:19:36 compute-0 sudo[138067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:36 compute-0 python3.9[138069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:36 compute-0 sudo[138067]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:36 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:36.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:19:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:19:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:19:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:37 compute-0 ceph-mon[74676]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:37 compute-0 sudo[138220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjblkrvehfmgxzjzcxfehpvrngjqvro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031176.9801354-329-124917738272804/AnsiballZ_stat.py'
Feb 02 11:19:37 compute-0 sudo[138220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:37 compute-0 python3.9[138222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:37 compute-0 sudo[138220]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:37 compute-0 sudo[138344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxzmsuciswnhynynxqvqqsfysruxgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031176.9801354-329-124917738272804/AnsiballZ_copy.py'
Feb 02 11:19:37 compute-0 sudo[138344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:37 compute-0 python3.9[138346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031176.9801354-329-124917738272804/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8444c41fd7bf84df11e6a2cf5fc067b38595f2fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:37 compute-0 sudo[138344]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:38 compute-0 sudo[138496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlpqpimkdxwitdkrekznxvrgdofhrsic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031178.0268958-329-145309727414856/AnsiballZ_stat.py'
Feb 02 11:19:38 compute-0 sudo[138496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:38 compute-0 python3.9[138498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:38 compute-0 sudo[138496]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:38 compute-0 sudo[138619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdcqtcxxxkpomcawlvrbfsjpmokrmbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031178.0268958-329-145309727414856/AnsiballZ_copy.py'
Feb 02 11:19:38 compute-0 sudo[138619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:38 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:38 compute-0 python3.9[138621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031178.0268958-329-145309727414856/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e66d49a3e1755c6c3ea667e61ae9febbe15b3592 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:38 compute-0 sudo[138619]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:39 compute-0 ceph-mon[74676]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:39 compute-0 sudo[138775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otlgcbxwdrjxsjbsxoecehgornujmcks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031178.997779-329-77508736886823/AnsiballZ_stat.py'
Feb 02 11:19:39 compute-0 sudo[138775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:39 compute-0 python3.9[138777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:39 compute-0 sudo[138775]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:39 compute-0 sudo[138899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzwalsyqetxkayammlcmecluvzpwoul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031178.997779-329-77508736886823/AnsiballZ_copy.py'
Feb 02 11:19:39 compute-0 sudo[138899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:39 compute-0 python3.9[138901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031178.997779-329-77508736886823/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=771571ac484e7d8d7010f841d15c20bba4f80684 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:39 compute-0 sudo[138899]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:40 compute-0 sudo[139051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khizifmxvbmzaqgwfzjbgboizxonztur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031180.043172-458-65979380724518/AnsiballZ_file.py'
Feb 02 11:19:40 compute-0 sudo[139051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:40 compute-0 python3.9[139053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:40 compute-0 sudo[139051]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:40 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc0013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.979567) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031180979640, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1811, "num_deletes": 250, "total_data_size": 3586398, "memory_usage": 3636152, "flush_reason": "Manual Compaction"}
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031180997345, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2063635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10840, "largest_seqno": 12650, "table_properties": {"data_size": 2057746, "index_size": 2964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14803, "raw_average_key_size": 20, "raw_value_size": 2044900, "raw_average_value_size": 2770, "num_data_blocks": 132, "num_entries": 738, "num_filter_entries": 738, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030993, "oldest_key_time": 1770030993, "file_creation_time": 1770031180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17854 microseconds, and 5119 cpu microseconds.
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.997432) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2063635 bytes OK
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.997456) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.999010) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.999030) EVENT_LOG_v1 {"time_micros": 1770031180999024, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.999055) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3579027, prev total WAL file size 3579027, number of live WAL files 2.
Feb 02 11:19:40 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.999995) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2015KB)], [26(13MB)]
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031181000080, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16365097, "oldest_snapshot_seqno": -1}
Feb 02 11:19:41 compute-0 sudo[139204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhpkiwbffuerzeuieouwtymzlrsvwrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031180.8480334-458-216554405726721/AnsiballZ_file.py'
Feb 02 11:19:41 compute-0 sudo[139204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4381 keys, 14383138 bytes, temperature: kUnknown
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031181088705, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14383138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14349877, "index_size": 21183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110670, "raw_average_key_size": 25, "raw_value_size": 14266049, "raw_average_value_size": 3256, "num_data_blocks": 907, "num_entries": 4381, "num_filter_entries": 4381, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:19:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.089160) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14383138 bytes
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.094890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.5 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.6 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(14.9) write-amplify(7.0) OK, records in: 4807, records dropped: 426 output_compression: NoCompression
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.094919) EVENT_LOG_v1 {"time_micros": 1770031181094907, "job": 10, "event": "compaction_finished", "compaction_time_micros": 88720, "compaction_time_cpu_micros": 22514, "output_level": 6, "num_output_files": 1, "total_output_size": 14383138, "num_input_records": 4807, "num_output_records": 4381, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031181095425, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031181097770, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:40.999853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.097826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.097831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.097857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.097859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:19:41.097861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:19:41 compute-0 ceph-mon[74676]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:41 compute-0 python3.9[139206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:41 compute-0 sudo[139204]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:41 compute-0 sudo[139357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teslsebaigtfctumhxrbxvzbfopshbtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031181.436185-510-87862998097599/AnsiballZ_stat.py'
Feb 02 11:19:41 compute-0 sudo[139357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:41 compute-0 python3.9[139359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:41 compute-0 sudo[139357]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:41 compute-0 sudo[139384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:19:41 compute-0 sudo[139384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:19:41 compute-0 sudo[139384]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:42 compute-0 sudo[139505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqvezwflmdstzvaxozvsmxisofvgbkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031181.436185-510-87862998097599/AnsiballZ_copy.py'
Feb 02 11:19:42 compute-0 sudo[139505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:42 compute-0 python3.9[139507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031181.436185-510-87862998097599/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a89e144cc901f58ce573c282ca4e97ec7ab1ae16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:42 compute-0 sudo[139505]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:42 compute-0 sudo[139657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiekvsohrwdqoofvhubwtyemrkmjaruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031182.42322-510-91560293171123/AnsiballZ_stat.py'
Feb 02 11:19:42 compute-0 sudo[139657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:42 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8000d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:42 compute-0 python3.9[139659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:42 compute-0 sudo[139657]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:43 compute-0 sudo[139781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oroecifyxgfyvosbpwwjznjawfymtozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031182.42322-510-91560293171123/AnsiballZ_copy.py'
Feb 02 11:19:43 compute-0 sudo[139781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8000d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:43 compute-0 ceph-mon[74676]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:43 compute-0 python3.9[139783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031182.42322-510-91560293171123/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e66d49a3e1755c6c3ea667e61ae9febbe15b3592 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:43 compute-0 sudo[139781]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:43 compute-0 sudo[139934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peyxmvqhkhqnoohugaykxtlgvryqctnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031183.3960238-510-122401758858249/AnsiballZ_stat.py'
Feb 02 11:19:43 compute-0 sudo[139934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:19:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:19:43 compute-0 python3.9[139936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:43 compute-0 sudo[139934]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:43.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
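The paired radosgw entries from 192.168.122.100 and 192.168.122.102 recur every two seconds throughout this section: anonymous HEAD / probes, consistent with load-balancer health checks, each answered 200 with near-zero latency. A probe of this shape can be reproduced in a few lines of Python; the host and port below are assumptions, since the journal does not record the address the beast frontend is bound to.

    import http.client

    # Hypothetical endpoint: the journal shows the probe and the 200,
    # but not the port radosgw (beast) is listening on.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=5)
    conn.request("HEAD", "/")          # same anonymous probe as logged
    response = conn.getresponse()
    print(response.status)             # a healthy gateway returns 200
    conn.close()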
Feb 02 11:19:44 compute-0 ceph-mon[74676]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:44 compute-0 sudo[140057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shohaxtjvhctoheiigoapooqxqkambln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031183.3960238-510-122401758858249/AnsiballZ_copy.py'
Feb 02 11:19:44 compute-0 sudo[140057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:44 compute-0 python3.9[140059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031183.3960238-510-122401758858249/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=425c7a8ccdaea636bc86fdf35478c317d9799752 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:44 compute-0 sudo[140057]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:19:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
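These audit entries show the mgr (entity mgr.compute-0.dhyzzj) polling the mon for the OSD blocklist roughly every fifteen seconds; the dispatched mon_command is the JSON form of `ceph osd blocklist ls`. The same query can be issued from the CLI, sketched below with subprocess; the field names used when parsing the output are an assumption, not taken from this log.

    import json
    import subprocess

    # CLI equivalent of the dispatched
    # {"prefix": "osd blocklist ls", "format": "json"} mon command.
    result = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for entry in json.loads(result.stdout or "[]"):
        # "addr"/"until" are the customary fields; assumed here.
        print(entry.get("addr"), "blocked until", entry.get("until"))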
Feb 02 11:19:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:44 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8000d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:45 compute-0 sudo[140211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrvkptgcceemzwkpvehqkphuxkiggey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031185.4268005-687-21479464837562/AnsiballZ_file.py'
Feb 02 11:19:45 compute-0 sudo[140211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:45 compute-0 python3.9[140213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:45 compute-0 sudo[140211]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:45.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:46 compute-0 ceph-mon[74676]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:46 compute-0 sudo[140363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzvkuvoxjsqbtzbautnvbhjeymzwmnbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031186.0160651-715-125293733029559/AnsiballZ_stat.py'
Feb 02 11:19:46 compute-0 sudo[140363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:46 compute-0 python3.9[140365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:46 compute-0 sudo[140363]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:46 compute-0 sudo[140486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggmwcpnpxdrwfzvzzdjjntzfxayqiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031186.0160651-715-125293733029559/AnsiballZ_copy.py'
Feb 02 11:19:46 compute-0 sudo[140486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:46 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:46 compute-0 python3.9[140488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031186.0160651-715-125293733029559/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:19:46 compute-0 sudo[140486]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:19:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:19:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:47 compute-0 sudo[140639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liyzhbwizyuqyzvunhqtwqredtytpevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031187.16928-761-134740282463218/AnsiballZ_file.py'
Feb 02 11:19:47 compute-0 sudo[140639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:47 compute-0 python3.9[140641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:47 compute-0 sudo[140639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:47 compute-0 sudo[140792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgkvqveebdlmmrnlcbbkxfjhedagrdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031187.729128-785-181356817212251/AnsiballZ_stat.py'
Feb 02 11:19:47 compute-0 sudo[140792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:48 compute-0 python3.9[140794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:48 compute-0 sudo[140792]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:48 compute-0 sudo[140915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvadaocgwopoxstlmuezrshzvxemcyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031187.729128-785-181356817212251/AnsiballZ_copy.py'
Feb 02 11:19:48 compute-0 sudo[140915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:48 compute-0 python3.9[140917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031187.729128-785-181356817212251/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:48 compute-0 ceph-mon[74676]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:48 compute-0 sudo[140915]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:48 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:49 compute-0 sudo[141068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwgzoverensmfmxtxacvdpsqgawdetz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031188.9155931-836-278844941967259/AnsiballZ_file.py'
Feb 02 11:19:49 compute-0 sudo[141068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:49 compute-0 python3.9[141070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:49 compute-0 sudo[141068]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:49.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:49 compute-0 sudo[141221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukkabwnfcoismsiyqoqbujyiriawrnzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031189.4858122-863-70415665081921/AnsiballZ_stat.py'
Feb 02 11:19:49 compute-0 sudo[141221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:49.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:49 compute-0 python3.9[141223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:49 compute-0 sudo[141221]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:50 compute-0 sudo[141344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunqnrzitalwlppjayttacmbeozqujqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031189.4858122-863-70415665081921/AnsiballZ_copy.py'
Feb 02 11:19:50 compute-0 sudo[141344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:50 compute-0 ceph-mon[74676]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:50 compute-0 python3.9[141346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031189.4858122-863-70415665081921/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:50 compute-0 sudo[141344]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:50 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:51 compute-0 sudo[141497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfckzxzbxdgfiwauigqdpthkqflpujar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031190.9496212-910-279532897623447/AnsiballZ_file.py'
Feb 02 11:19:51 compute-0 sudo[141497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:51 compute-0 python3.9[141499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:51 compute-0 sudo[141497]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:51 compute-0 sudo[141650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpyisxmqwyoovjrrivonwfkasjktnspt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031191.5741131-936-112810277085675/AnsiballZ_stat.py'
Feb 02 11:19:51 compute-0 sudo[141650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:51.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:52 compute-0 python3.9[141652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:52 compute-0 sudo[141650]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:52 compute-0 sudo[141773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lddwrvkcgpbrrmzcnmjvfazcswafeusn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031191.5741131-936-112810277085675/AnsiballZ_copy.py'
Feb 02 11:19:52 compute-0 sudo[141773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:52 compute-0 python3.9[141775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031191.5741131-936-112810277085675/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:52 compute-0 sudo[141773]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:52 compute-0 ceph-mon[74676]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:52 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:52 compute-0 sudo[141926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deobprwgmwtqumtbpyutlgqvxxoitqds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031192.700173-983-262146540236150/AnsiballZ_file.py'
Feb 02 11:19:52 compute-0 sudo[141926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:53 compute-0 python3.9[141928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:53 compute-0 sudo[141926]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:53 compute-0 sudo[142079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxicilrzkqwqehrksmprozitcwhflakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031193.3090746-1010-27624003903272/AnsiballZ_stat.py'
Feb 02 11:19:53 compute-0 sudo[142079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:53 compute-0 python3.9[142081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:53 compute-0 sudo[142079]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:53.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:54 compute-0 sudo[142202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unmgwllzmvnzagigezrjomyjharwlzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031193.3090746-1010-27624003903272/AnsiballZ_copy.py'
Feb 02 11:19:54 compute-0 sudo[142202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:54 compute-0 python3.9[142204]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031193.3090746-1010-27624003903272/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:54 compute-0 sudo[142202]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:54 compute-0 sudo[142354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyedendfrqwvpntwgcwmluckycyliabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031194.487561-1059-36094199175572/AnsiballZ_file.py'
Feb 02 11:19:54 compute-0 sudo[142354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:54 compute-0 ceph-mon[74676]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:54 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:54 compute-0 python3.9[142356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:19:54 compute-0 sudo[142354]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:55 compute-0 sudo[142507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvponmewhrxgljihzjkmxrbxrpihuwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031195.1791415-1082-148002581972045/AnsiballZ_stat.py'
Feb 02 11:19:55 compute-0 sudo[142507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:55 compute-0 python3.9[142509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:19:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:55 compute-0 sudo[142507]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:55 compute-0 sudo[142631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-budvqfdfotjfcccthcvmpycnegmvuzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031195.1791415-1082-148002581972045/AnsiballZ_copy.py'
Feb 02 11:19:55 compute-0 sudo[142631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:19:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:55.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:19:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:19:56 compute-0 python3.9[142633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031195.1791415-1082-148002581972045/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a812cb1395422830114aed94a0605874e7a92a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:19:56 compute-0 sudo[142631]: pam_unix(sudo:session): session closed for user root
Feb 02 11:19:56 compute-0 sshd-session[136475]: Connection closed by 192.168.122.30 port 54574
Feb 02 11:19:56 compute-0 sshd-session[136472]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:19:56 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Feb 02 11:19:56 compute-0 systemd[1]: session-48.scope: Consumed 19.607s CPU time.
Feb 02 11:19:56 compute-0 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Feb 02 11:19:56 compute-0 systemd-logind[793]: Removed session 48.
Feb 02 11:19:56 compute-0 ceph-mon[74676]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:19:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:56 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:56.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:19:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:19:56.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
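Both alertmanager failures point the same way: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver, both taken verbatim from the error text) are not answering within the notification deadline. A quick reachability check against one receiver, sketched in Python with a comparable timeout; the empty JSON body is a placeholder, not a real Prometheus alert payload.

    import urllib.error
    import urllib.request

    # URL copied from the alertmanager error above.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    request = urllib.request.Request(
        url, data=b"{}", method="POST",
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            print("receiver answered:", response.status)
    except urllib.error.URLError as exc:
        # Mirrors the "dial tcp ... i/o timeout" symptom in the log.
        print("receiver unreachable:", exc.reason)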
Feb 02 11:19:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:19:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:19:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:19:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:57.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:57.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:58 compute-0 ceph-mon[74676]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:58 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc003aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:19:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:19:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d40041e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:19:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:19:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:19:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:19:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:19:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:19:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:19:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:19:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:19:59.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:20:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb 02 11:20:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:20:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2712 writes, 12K keys, 2712 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2712 writes, 2712 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2712 writes, 12K keys, 2712 commit groups, 1.0 writes per commit group, ingest: 24.29 MB, 0.04 MB/s
                                           Interval WAL: 2712 writes, 2712 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.4      0.16              0.04         5    0.033       0      0       0.0       0.0
                                             L6      1/0   13.72 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.5    166.9    146.7      0.35              0.09         4    0.088     16K   1783       0.0       0.0
                                            Sum      1/0   13.72 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5    113.9    140.6      0.51              0.13         9    0.057     16K   1783       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5    114.7    141.4      0.51              0.13         8    0.064     16K   1783       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    166.9    146.7      0.35              0.09         4    0.088     16K   1783       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.9      0.16              0.04         4    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594e304b350#2 capacity: 304.00 MB usage: 2.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(157,1.90 MB,0.62548%) FilterBlock(10,55.48 KB,0.0178237%) IndexBlock(10,115.48 KB,0.0370979%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 11:20:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:00 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:00 compute-0 ceph-mon[74676]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:20:00 compute-0 ceph-mon[74676]: overall HEALTH_OK
Feb 02 11:20:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc0043c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:20:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:20:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:20:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:20:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:01.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:20:02 compute-0 sudo[142664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:20:02 compute-0 sudo[142664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:02 compute-0 sudo[142664]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:02 compute-0 sshd-session[142689]: Accepted publickey for zuul from 192.168.122.30 port 33874 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:20:02 compute-0 systemd-logind[793]: New session 49 of user zuul.
Feb 02 11:20:02 compute-0 systemd[1]: Started Session 49 of User zuul.
Feb 02 11:20:02 compute-0 sshd-session[142689]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:20:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:02 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:02 compute-0 ceph-mon[74676]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:20:03 compute-0 sudo[142843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqijmojompbdqngxaidwgcgmwqkdqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031202.5853753-21-164618405828292/AnsiballZ_file.py'
Feb 02 11:20:03 compute-0 sudo[142843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:03 compute-0 python3.9[142845]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:03 compute-0 sudo[142843]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:20:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:03 compute-0 sudo[142996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqhpsjijrpzwqjahojivcsvoozzgdhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031203.4640808-57-12925278096643/AnsiballZ_stat.py'
Feb 02 11:20:03 compute-0 sudo[142996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:03.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:04 compute-0 python3.9[142998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:04 compute-0 sudo[142996]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:04 compute-0 sudo[143119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uheozkdcwavjsmtizitwclngrkihsocm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031203.4640808-57-12925278096643/AnsiballZ_copy.py'
Feb 02 11:20:04 compute-0 sudo[143119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:04 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc0043c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:04 compute-0 python3.9[143121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031203.4640808-57-12925278096643/.source.conf _original_basename=ceph.conf follow=False checksum=6509c53462565d28e4c05da2a8d00510711106b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:04 compute-0 ceph-mon[74676]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:20:04 compute-0 sudo[143119]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:05 compute-0 sudo[143272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyoybecyilyzdsmbcfqykdsydrprcfnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031205.0627515-57-48622237277378/AnsiballZ_stat.py'
Feb 02 11:20:05 compute-0 sudo[143272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:05 compute-0 python3.9[143274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:05 compute-0 sudo[143272]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:20:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:05 compute-0 sudo[143396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktquxxgcjocwavdkojeyjurjhwfhnoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031205.0627515-57-48622237277378/AnsiballZ_copy.py'
Feb 02 11:20:05 compute-0 sudo[143396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:05.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:05 compute-0 python3.9[143398]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031205.0627515-57-48622237277378/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=007d0511578188361071d471ee7ce6e57b71b01c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:06 compute-0 sudo[143396]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:06 compute-0 sshd-session[142692]: Connection closed by 192.168.122.30 port 33874
Feb 02 11:20:06 compute-0 sshd-session[142689]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:20:06 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Feb 02 11:20:06 compute-0 systemd[1]: session-49.scope: Consumed 2.349s CPU time.
Feb 02 11:20:06 compute-0 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Feb 02 11:20:06 compute-0 systemd-logind[793]: Removed session 49.
Feb 02 11:20:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:06 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:20:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:20:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:20:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:06] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:20:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:06] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:20:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc0043c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:07 compute-0 ceph-mon[74676]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:20:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:20:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:20:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112008 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:20:08 compute-0 ceph-mon[74676]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76d4004220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:09.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:09.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:10 compute-0 ceph-mon[74676]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:10 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b8002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8001810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc0043c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:20:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:20:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000031s ======
Feb 02 11:20:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Feb 02 11:20:12 compute-0 sshd-session[143431]: Accepted publickey for zuul from 192.168.122.30 port 54540 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:20:12 compute-0 systemd-logind[793]: New session 50 of user zuul.
Feb 02 11:20:12 compute-0 systemd[1]: Started Session 50 of User zuul.
Feb 02 11:20:12 compute-0 sshd-session[143431]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:20:12 compute-0 ceph-mon[74676]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:12 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:13 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:13 compute-0 python3.9[143587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:20:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:20:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:14 compute-0 sudo[143742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnzwabgmblryccyawzstwerwwasdhfas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031214.202987-57-172229129408390/AnsiballZ_file.py'
Feb 02 11:20:14 compute-0 sudo[143742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:14 compute-0 ceph-mon[74676]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.794920) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214794976, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 524, "num_deletes": 251, "total_data_size": 597928, "memory_usage": 607960, "flush_reason": "Manual Compaction"}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214801308, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 591739, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12651, "largest_seqno": 13174, "table_properties": {"data_size": 588935, "index_size": 840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6444, "raw_average_key_size": 18, "raw_value_size": 583348, "raw_average_value_size": 1638, "num_data_blocks": 38, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031181, "oldest_key_time": 1770031181, "file_creation_time": 1770031214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 6438 microseconds, and 2330 cpu microseconds.
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.801360) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 591739 bytes OK
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.801388) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.807142) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.807176) EVENT_LOG_v1 {"time_micros": 1770031214807169, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.807206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 595005, prev total WAL file size 595005, number of live WAL files 2.
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:20:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:14 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.811116) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(577KB)], [29(13MB)]
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214811195, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 14974877, "oldest_snapshot_seqno": -1}
Feb 02 11:20:14 compute-0 python3.9[143744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:14 compute-0 sudo[143742]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4227 keys, 12032133 bytes, temperature: kUnknown
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214906045, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12032133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12001526, "index_size": 18913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108362, "raw_average_key_size": 25, "raw_value_size": 11921971, "raw_average_value_size": 2820, "num_data_blocks": 798, "num_entries": 4227, "num_filter_entries": 4227, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.906318) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12032133 bytes
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.907950) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.7 rd, 126.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(45.6) write-amplify(20.3) OK, records in: 4737, records dropped: 510 output_compression: NoCompression
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.907988) EVENT_LOG_v1 {"time_micros": 1770031214907971, "job": 12, "event": "compaction_finished", "compaction_time_micros": 94942, "compaction_time_cpu_micros": 22796, "output_level": 6, "num_output_files": 1, "total_output_size": 12032133, "num_input_records": 4737, "num_output_records": 4227, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214908209, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031214909758, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.807614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.909907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.909914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.909915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.909917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:14 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:20:14.909918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:20:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:15 compute-0 sudo[143895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlzxbiaobtkczjrxwpfsginkjvoddolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031214.9748473-57-36870609349595/AnsiballZ_file.py'
Feb 02 11:20:15 compute-0 sudo[143895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:15 compute-0 python3.9[143897]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:15 compute-0 sudo[143895]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:15 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:15.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:16 compute-0 python3.9[144048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:20:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:16 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:16 compute-0 ceph-mon[74676]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:16.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:20:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:20:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:20:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:17 compute-0 sudo[144199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoyqioibrdimweawvebjlhousrnwtoiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031216.7267826-126-46062961690388/AnsiballZ_seboolean.py'
Feb 02 11:20:17 compute-0 sudo[144199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:17 compute-0 python3.9[144201]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 02 11:20:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:20:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:17 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:20:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:20:18 compute-0 sudo[144199]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:18 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:18 compute-0 ceph-mon[74676]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:19 compute-0 sudo[144357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdqgqawwyvvwrhqimmxskgpjaqotmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031219.0878215-156-31520797675572/AnsiballZ_setup.py'
Feb 02 11:20:19 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 02 11:20:19 compute-0 sudo[144357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:19 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:19 compute-0 python3.9[144359]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:20:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:19 compute-0 sudo[144357]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:19.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:20 compute-0 sudo[144442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxolsncalpqcxlvafmpgdfzngydwjyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031219.0878215-156-31520797675572/AnsiballZ_dnf.py'
Feb 02 11:20:20 compute-0 sudo[144442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:20 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:20:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:20 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:20:20 compute-0 python3.9[144444]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:20:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:20 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:20 compute-0 ceph-mon[74676]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:21 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:22 compute-0 sudo[144448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:20:22 compute-0 sudo[144448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:22 compute-0 sudo[144448]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:22 compute-0 sudo[144442]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:22 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:22 compute-0 sudo[144623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgrasttliyfdsrfmqtpamuzbnlnukmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031222.3109965-192-108642050096665/AnsiballZ_systemd.py'
Feb 02 11:20:22 compute-0 sudo[144623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:22 compute-0 ceph-mon[74676]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:23 compute-0 python3.9[144625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:20:23 compute-0 sudo[144623]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:20:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:23 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:23.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:23 compute-0 sudo[144779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfjnfswnlbmougbiewufxktbpngvsdij ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031223.4820833-216-214340841558860/AnsiballZ_edpm_nftables_snippet.py'
Feb 02 11:20:23 compute-0 sudo[144779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:24 compute-0 python3[144781]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 02 11:20:24 compute-0 sudo[144779]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:24 compute-0 sudo[144931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fckyarwpnistervwmtmwpjvmcodiknkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031224.4557545-243-28508148462384/AnsiballZ_file.py'
Feb 02 11:20:24 compute-0 sudo[144931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:24 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:24 compute-0 python3.9[144933]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:24 compute-0 sudo[144931]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:24 compute-0 ceph-mon[74676]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:25 compute-0 sudo[145084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozeiovpyadaxbrnoordjvsuqoewojcnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031225.104826-267-197169293655383/AnsiballZ_stat.py'
Feb 02 11:20:25 compute-0 sudo[145084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:20:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:25 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:25 compute-0 python3.9[145086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:25 compute-0 sudo[145084]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:25 compute-0 sudo[145114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:20:25 compute-0 sudo[145114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:25 compute-0 sudo[145114]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:25 compute-0 sudo[145162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:20:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:25 compute-0 sudo[145162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:25 compute-0 sudo[145213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilxjhgucyewrwcyorevuzcrxsdnaqhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031225.104826-267-197169293655383/AnsiballZ_file.py'
Feb 02 11:20:25 compute-0 sudo[145213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:26 compute-0 python3.9[145215]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:26 compute-0 sudo[145213]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:26 compute-0 sudo[145162]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:26 compute-0 ceph-mon[74676]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:20:26 compute-0 sudo[145397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iahpkumsognibhbipowzqeibmxyyjwci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031226.4602692-303-212438086028088/AnsiballZ_stat.py'
Feb 02 11:20:26 compute-0 sudo[145397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:20:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:26 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e8009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:20:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:20:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:20:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:26.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:20:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:26.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:20:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:26] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:20:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:26] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:20:27 compute-0 sudo[145401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:20:27 compute-0 sudo[145401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:27 compute-0 sudo[145401]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:27 compute-0 python3.9[145399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:27 compute-0 sudo[145426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:20:27 compute-0 sudo[145426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:27 compute-0 sudo[145397]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:27 compute-0 sudo[145526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsiqmkrhrsrevubsghodwzwpofinljz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031226.4602692-303-212438086028088/AnsiballZ_file.py'
Feb 02 11:20:27 compute-0 sudo[145526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:27 compute-0 python3.9[145535]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2lq_ks0c recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:27 compute-0 sudo[145526]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.417993249 +0000 UTC m=+0.021694887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:20:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.60200236 +0000 UTC m=+0.205703968 container create 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:20:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:27 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:27 compute-0 systemd[1]: Started libpod-conmon-4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0.scope.
Feb 02 11:20:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.76448834 +0000 UTC m=+0.368189978 container init 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.772632621 +0000 UTC m=+0.376334239 container start 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:20:27 compute-0 eager_bohr[145663]: 167 167
Feb 02 11:20:27 compute-0 systemd[1]: libpod-4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0.scope: Deactivated successfully.
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.816876416 +0000 UTC m=+0.420578054 container attach 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:20:27 compute-0 podman[145569]: 2026-02-02 11:20:27.817411031 +0000 UTC m=+0.421112679 container died 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:20:27 compute-0 sudo[145752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnjgekmsggsblebkgcqtbcemfvvswzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031227.6405554-339-46663964735093/AnsiballZ_stat.py'
Feb 02 11:20:27 compute-0 sudo[145752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:28 compute-0 python3.9[145754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f6b1d440e8a8278c8959c4a5bb395a72de2241a25fa12c69a73c3bad27b73c3-merged.mount: Deactivated successfully.
Feb 02 11:20:28 compute-0 sudo[145752]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:28 compute-0 sudo[145831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzagjmkdwvxtkvhgsdgecoesnqcfswfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031227.6405554-339-46663964735093/AnsiballZ_file.py'
Feb 02 11:20:28 compute-0 sudo[145831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:28 compute-0 podman[145569]: 2026-02-02 11:20:28.411841357 +0000 UTC m=+1.015542975 container remove 4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:20:28 compute-0 systemd[1]: libpod-conmon-4da48eabe34bc89293a59608214a18714e4b3826e622cf34816094f24d453da0.scope: Deactivated successfully.
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.549673048 +0000 UTC m=+0.049021982 container create 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:20:28 compute-0 systemd[1]: Started libpod-conmon-85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9.scope.
Feb 02 11:20:28 compute-0 python3.9[145833]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:28 compute-0 ceph-mon[74676]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.531711858 +0000 UTC m=+0.031060722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:28 compute-0 sudo[145831]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.636369987 +0000 UTC m=+0.135718861 container init 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.643698785 +0000 UTC m=+0.143047629 container start 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.649231442 +0000 UTC m=+0.148580286 container attach 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:20:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:28 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:28 compute-0 affectionate_elbakyan[145858]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:20:28 compute-0 affectionate_elbakyan[145858]: --> All data devices are unavailable
Feb 02 11:20:28 compute-0 systemd[1]: libpod-85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9.scope: Deactivated successfully.
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.941670219 +0000 UTC m=+0.441019063 container died 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-140c4ceb4166c488a1c8595a88e928f50f11a083eff2ff3799aed66cbff6a761-merged.mount: Deactivated successfully.
Feb 02 11:20:28 compute-0 podman[145841]: 2026-02-02 11:20:28.989261609 +0000 UTC m=+0.488610453 container remove 85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:20:28 compute-0 systemd[1]: libpod-conmon-85b7dd9cc5a4fdec399f88295198b9fda62b72a39b3da300d3ced5de2404f6c9.scope: Deactivated successfully.
Feb 02 11:20:29 compute-0 sudo[145426]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:29 compute-0 sudo[145910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:20:29 compute-0 sudo[145910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:29 compute-0 sudo[145910]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:29 compute-0 sudo[145935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:20:29 compute-0 sudo[145935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:20:29
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'images', 'volumes']
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:20:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:20:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:29 compute-0 podman[146053]: 2026-02-02 11:20:29.529813856 +0000 UTC m=+0.020096721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:29 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:20:29 compute-0 podman[146053]: 2026-02-02 11:20:29.661809561 +0000 UTC m=+0.152092416 container create cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:20:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:20:29 compute-0 sudo[146141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beinckxsuyblccqvpzxnlgnvtitmpoew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031229.3109093-378-229252228996652/AnsiballZ_command.py'
Feb 02 11:20:29 compute-0 sudo[146141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:29 compute-0 systemd[1]: Started libpod-conmon-cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c.scope.
Feb 02 11:20:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:29 compute-0 python3.9[146143]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:29.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:29 compute-0 sudo[146141]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:29 compute-0 podman[146053]: 2026-02-02 11:20:29.99202411 +0000 UTC m=+0.482306975 container init cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:20:29 compute-0 podman[146053]: 2026-02-02 11:20:29.999350198 +0000 UTC m=+0.489633053 container start cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:20:30 compute-0 peaceful_swanson[146146]: 167 167
Feb 02 11:20:30 compute-0 systemd[1]: libpod-cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c.scope: Deactivated successfully.
Feb 02 11:20:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112030 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:20:30 compute-0 podman[146053]: 2026-02-02 11:20:30.274898986 +0000 UTC m=+0.765181871 container attach cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:20:30 compute-0 podman[146053]: 2026-02-02 11:20:30.277013906 +0000 UTC m=+0.767296771 container died cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cddf321cf935ac2186c24e30829bd0dcacb095d5c1b3d93740c76ed99e14a783-merged.mount: Deactivated successfully.
Feb 02 11:20:30 compute-0 sudo[146313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxrkmeppsoulpmhifwvtpfiiuvafotbf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031230.1208415-402-189027213081462/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 11:20:30 compute-0 sudo[146313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:30 compute-0 podman[146053]: 2026-02-02 11:20:30.659614402 +0000 UTC m=+1.149897257 container remove cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swanson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:20:30 compute-0 systemd[1]: libpod-conmon-cbc04c26118cdf95cc3b7db564286ae9419a52a85699f7c8c8c10f6e4c51542c.scope: Deactivated successfully.
Feb 02 11:20:30 compute-0 python3[146315]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 11:20:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:30 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:30 compute-0 sudo[146313]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:30 compute-0 podman[146323]: 2026-02-02 11:20:30.872011918 +0000 UTC m=+0.118871474 container create 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:20:30 compute-0 podman[146323]: 2026-02-02 11:20:30.780832241 +0000 UTC m=+0.027691827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:30 compute-0 systemd[1]: Started libpod-conmon-25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398.scope.
Feb 02 11:20:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728d8c2323efcd40571aa4b670c920e194371ed71d568876b87a712d4048ef77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728d8c2323efcd40571aa4b670c920e194371ed71d568876b87a712d4048ef77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728d8c2323efcd40571aa4b670c920e194371ed71d568876b87a712d4048ef77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728d8c2323efcd40571aa4b670c920e194371ed71d568876b87a712d4048ef77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:31 compute-0 podman[146323]: 2026-02-02 11:20:31.019354678 +0000 UTC m=+0.266214254 container init 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:20:31 compute-0 podman[146323]: 2026-02-02 11:20:31.02822245 +0000 UTC m=+0.275082016 container start 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:20:31 compute-0 podman[146323]: 2026-02-02 11:20:31.101399256 +0000 UTC m=+0.348258812 container attach 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:20:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]: {
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:     "1": [
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:         {
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "devices": [
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "/dev/loop3"
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             ],
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "lv_name": "ceph_lv0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "lv_size": "21470642176",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "name": "ceph_lv0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "tags": {
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.cluster_name": "ceph",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.crush_device_class": "",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.encrypted": "0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.osd_id": "1",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.type": "block",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.vdo": "0",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:                 "ceph.with_tpm": "0"
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             },
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "type": "block",
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:             "vg_name": "ceph_vg0"
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:         }
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]:     ]
Feb 02 11:20:31 compute-0 xenodochial_blackburn[146365]: }
Feb 02 11:20:31 compute-0 sudo[146499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ococuckjpmhedaxieilurprymvddijlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031231.0108016-426-231913819805123/AnsiballZ_stat.py'
Feb 02 11:20:31 compute-0 sudo[146499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:31 compute-0 ceph-mon[74676]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:31 compute-0 systemd[1]: libpod-25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398.scope: Deactivated successfully.
Feb 02 11:20:31 compute-0 podman[146323]: 2026-02-02 11:20:31.342674012 +0000 UTC m=+0.589533568 container died 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Feb 02 11:20:31 compute-0 python3.9[146501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:31 compute-0 sudo[146499]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:31 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-728d8c2323efcd40571aa4b670c920e194371ed71d568876b87a712d4048ef77-merged.mount: Deactivated successfully.
Feb 02 11:20:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:31 compute-0 podman[146323]: 2026-02-02 11:20:31.841043232 +0000 UTC m=+1.087902778 container remove 25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:20:31 compute-0 sudo[145935]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:31 compute-0 systemd[1]: libpod-conmon-25502bdc6f59166c29b170c2db2129708a1d8b963e5853c0a0aa91fcc77b6398.scope: Deactivated successfully.
Feb 02 11:20:31 compute-0 sudo[146567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:20:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:31.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:31 compute-0 sudo[146567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:31 compute-0 sudo[146567]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:31 compute-0 sudo[146616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:20:32 compute-0 sudo[146616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:32 compute-0 sudo[146688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uetgiutlnwtlibydrdaohnxlucuijknk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031231.0108016-426-231913819805123/AnsiballZ_copy.py'
Feb 02 11:20:32 compute-0 sudo[146688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:32 compute-0 python3.9[146690]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031231.0108016-426-231913819805123/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:32 compute-0 sudo[146688]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.443367882 +0000 UTC m=+0.103237190 container create e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.362697723 +0000 UTC m=+0.022567051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:32 compute-0 ceph-mon[74676]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:32 compute-0 systemd[1]: Started libpod-conmon-e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c.scope.
Feb 02 11:20:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.5289869 +0000 UTC m=+0.188856208 container init e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.534329672 +0000 UTC m=+0.194198980 container start e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:20:32 compute-0 epic_driscoll[146825]: 167 167
Feb 02 11:20:32 compute-0 systemd[1]: libpod-e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c.scope: Deactivated successfully.
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.575293994 +0000 UTC m=+0.235163322 container attach e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.576510078 +0000 UTC m=+0.236379386 container died e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6120a8a14add86dd2d9fa5c28c05f032350e676cbc2baf97b6e3a9cb69c566a-merged.mount: Deactivated successfully.
Feb 02 11:20:32 compute-0 sudo[146917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egyqwyixdcntuvusnfjuddgbfsnswjqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031232.4181743-471-275622953399862/AnsiballZ_stat.py'
Feb 02 11:20:32 compute-0 sudo[146917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:32 compute-0 podman[146756]: 2026-02-02 11:20:32.8081287 +0000 UTC m=+0.467998018 container remove e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:20:32 compute-0 systemd[1]: libpod-conmon-e5f35f2798860b130157a886f8435e1262ed64d15a538e44c4e32ec56f7cf39c.scope: Deactivated successfully.
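The create/init/start/attach/died/remove sequence for epic_driscoll is a one-shot cephadm probe container: it prints "167 167" (the ceph uid/gid baked into the image) and is immediately discarded. Roughly equivalent to the following, though the exact in-container command is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the immediate "died"/"remove" events seen above
    res = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(res.stdout.strip())      # e.g. "167 167"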
Feb 02 11:20:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:32 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:32 compute-0 python3.9[146919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:32 compute-0 podman[146928]: 2026-02-02 11:20:32.949542922 +0000 UTC m=+0.054449986 container create 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:20:32 compute-0 sudo[146917]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:33 compute-0 podman[146928]: 2026-02-02 11:20:32.919019996 +0000 UTC m=+0.023927080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:20:33 compute-0 systemd[1]: Started libpod-conmon-0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112.scope.
Feb 02 11:20:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31ed7b7742b8f7fa3f5a198f048057614d1e8e595930de117f8ec2bdb4dac5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31ed7b7742b8f7fa3f5a198f048057614d1e8e595930de117f8ec2bdb4dac5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31ed7b7742b8f7fa3f5a198f048057614d1e8e595930de117f8ec2bdb4dac5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31ed7b7742b8f7fa3f5a198f048057614d1e8e595930de117f8ec2bdb4dac5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:20:33 compute-0 podman[146928]: 2026-02-02 11:20:33.121139561 +0000 UTC m=+0.226046645 container init 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:20:33 compute-0 podman[146928]: 2026-02-02 11:20:33.127386848 +0000 UTC m=+0.232293912 container start 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:20:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:33 compute-0 podman[146928]: 2026-02-02 11:20:33.151303237 +0000 UTC m=+0.256210321 container attach 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:20:33 compute-0 sudo[147072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvglldaojrhtbrlvwktsymkyjskxmxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031232.4181743-471-275622953399862/AnsiballZ_copy.py'
Feb 02 11:20:33 compute-0 sudo[147072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:33 compute-0 python3.9[147074]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031232.4181743-471-275622953399862/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:33 compute-0 sudo[147072]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:33 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:33.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:33 compute-0 lvm[147229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:20:33 compute-0 lvm[147229]: VG ceph_vg0 finished
Feb 02 11:20:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:33.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:33 compute-0 relaxed_moser[146969]: {}
Feb 02 11:20:33 compute-0 sudo[147298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncbjcyxjgqthhkemszpujkbcxtbkmpdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031233.7230554-516-120320083291344/AnsiballZ_stat.py'
Feb 02 11:20:33 compute-0 sudo[147298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:34 compute-0 systemd[1]: libpod-0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112.scope: Deactivated successfully.
Feb 02 11:20:34 compute-0 systemd[1]: libpod-0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112.scope: Consumed 1.113s CPU time.
Feb 02 11:20:34 compute-0 podman[146928]: 2026-02-02 11:20:34.012602035 +0000 UTC m=+1.117509099 container died 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c31ed7b7742b8f7fa3f5a198f048057614d1e8e595930de117f8ec2bdb4dac5-merged.mount: Deactivated successfully.
Feb 02 11:20:34 compute-0 podman[146928]: 2026-02-02 11:20:34.1470755 +0000 UTC m=+1.251982574 container remove 0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:20:34 compute-0 systemd[1]: libpod-conmon-0d31a46556d6988823e32402bfeb9eeb4807d071d763851e180b416f38d62112.scope: Deactivated successfully.
Feb 02 11:20:34 compute-0 python3.9[147300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:34 compute-0 sudo[146616]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:20:34 compute-0 sudo[147298]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:20:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:34 compute-0 sudo[147367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:20:34 compute-0 sudo[147367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:34 compute-0 sudo[147367]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:34 compute-0 sudo[147459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utchwfyszwbeqyltcqxljicwljfxbzwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031233.7230554-516-120320083291344/AnsiballZ_copy.py'
Feb 02 11:20:34 compute-0 sudo[147459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:34 compute-0 python3.9[147461]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031233.7230554-516-120320083291344/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:34 compute-0 sudo[147459]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:34 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:35 compute-0 ceph-mon[74676]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:20:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:35 compute-0 sudo[147612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyowvjnohivixkhpzzyvmwyefdhiokah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031234.928346-561-251430234509806/AnsiballZ_stat.py'
Feb 02 11:20:35 compute-0 sudo[147612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:35 compute-0 python3.9[147614]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:35 compute-0 sudo[147612]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:35 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:35 compute-0 sudo[147738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llmbubabeoshkxvuoimrrpvkbebyoovz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031234.928346-561-251430234509806/AnsiballZ_copy.py'
Feb 02 11:20:35 compute-0 sudo[147738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:20:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:35.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:20:35 compute-0 python3.9[147740]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031234.928346-561-251430234509806/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:35 compute-0 sudo[147738]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:36 compute-0 ceph-mon[74676]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:20:36 compute-0 sudo[147890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqywjscmpmxmbesnvjyfxcankuxdlyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031236.171559-606-220018946053289/AnsiballZ_stat.py'
Feb 02 11:20:36 compute-0 sudo[147890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
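The scrape failure above is a plain EACCES: the unprivileged ceph-crash user cannot descend into /var/lib/ceph/crash. A quick ownership check on the host (uid/gid 167 is the ceph user in these containers):

    import os, stat

    st = os.stat("/var/lib/ceph/crash")
    print(st.st_uid, st.st_gid, stat.filemode(st.st_mode))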
Feb 02 11:20:36 compute-0 python3.9[147892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:36 compute-0 sudo[147890]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:36 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:36.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:20:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:36] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:20:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:36] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:20:37 compute-0 sudo[148016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvvpfgfuqtuyopelkhvvzddvckfktzru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031236.171559-606-220018946053289/AnsiballZ_copy.py'
Feb 02 11:20:37 compute-0 sudo[148016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:37 compute-0 python3.9[148018]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031236.171559-606-220018946053289/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:37 compute-0 sudo[148016]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:37 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:37 compute-0 sudo[148169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvjybpngwypvhvvpvjhxkjysgbhmzba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031237.4577565-651-13489514023265/AnsiballZ_file.py'
Feb 02 11:20:37 compute-0 sudo[148169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:37.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:37 compute-0 python3.9[148171]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:37 compute-0 sudo[148169]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:37.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:38 compute-0 sudo[148321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ownejjmopxfpqnhmtbfxkhsgfuflqsaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031238.0619051-675-65712206428024/AnsiballZ_command.py'
Feb 02 11:20:38 compute-0 sudo[148321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:38 compute-0 python3.9[148323]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:38 compute-0 sudo[148321]: pam_unix(sudo:session): session closed for user root
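The pipeline above is a dry run: the five edpm fragments are concatenated in include order and handed to nft -c, which parses and validates the combined ruleset without touching the kernel. An equivalent sketch:

    import subprocess
    from pathlib import Path

    FRAGS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = "".join(Path(f).read_text() for f in FRAGS)
    # -c = check only; nothing is applied if this succeeds
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)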
Feb 02 11:20:38 compute-0 ceph-mon[74676]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:38 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:39 compute-0 sudo[148477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlptqeeewkwkodpplxkirdpfqkipdrzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031238.7036912-699-189163456188866/AnsiballZ_blockinfile.py'
Feb 02 11:20:39 compute-0 sudo[148477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:39 compute-0 python3.9[148479]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:39 compute-0 sudo[148477]: pam_unix(sudo:session): session closed for user root
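Rendered, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf (markers and includes taken from the invocation above) is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

validate=nft -c -f %s means the edited file is syntax-checked before it replaces the original, so a bad fragment cannot brick the persistent ruleset.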
Feb 02 11:20:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:39 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:39.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:39 compute-0 sudo[148630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuunhedjoomeirrauzjacimouwogatxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031239.5744715-726-14154468596799/AnsiballZ_command.py'
Feb 02 11:20:39 compute-0 sudo[148630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:20:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:20:39 compute-0 python3.9[148632]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:39 compute-0 sudo[148630]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112040 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:20:40 compute-0 sudo[148783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skbkrmoftdtntlnzytjcvqtyoqguudjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031240.1777217-750-25367761048517/AnsiballZ_stat.py'
Feb 02 11:20:40 compute-0 sudo[148783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:40 compute-0 python3.9[148785]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:20:40 compute-0 sudo[148783]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:40 compute-0 ceph-mon[74676]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:40 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:41 compute-0 sudo[148938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnzftriruxlznecnekwyewhbtyrqezz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031240.8112597-774-166724001570466/AnsiballZ_command.py'
Feb 02 11:20:41 compute-0 sudo[148938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:41 compute-0 python3.9[148940]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:41 compute-0 sudo[148938]: pam_unix(sudo:session): session closed for user root
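Note the apply order across the last two nft invocations: edpm-chains.nft is loaded on its own first so every chain exists, then flushes, rules and updated jumps are applied as one transaction; the edpm-rules.nft.changed file touched earlier serves as the change sentinel and is removed once this succeeds (see the file task below). A sketch of the two-phase apply:

    import subprocess
    from pathlib import Path

    # phase 1: make sure all chains exist
    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
    # phase 2: flush and repopulate in a single transaction
    phase2 = "".join(Path(f).read_text() for f in [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ])
    subprocess.run(["nft", "-f", "-"], input=phase2, text=True, check=True)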
Feb 02 11:20:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:41 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:41 compute-0 sudo[149094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmiqqzowbeiqhorfenroaypnmbhezfvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031241.4763074-798-196291462011789/AnsiballZ_file.py'
Feb 02 11:20:41 compute-0 sudo[149094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:41.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:41 compute-0 python3.9[149096]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:41 compute-0 sudo[149094]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:20:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:20:42 compute-0 sudo[149121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:20:42 compute-0 sudo[149121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:20:42 compute-0 sudo[149121]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:42 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:42 compute-0 ceph-mon[74676]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:20:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:43 compute-0 python3.9[149272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:20:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:43 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76e800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:44 compute-0 sudo[149424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxvpoaumrlaigihdlrozinsjkugvjdma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031244.0341158-918-43751591665470/AnsiballZ_command.py'
Feb 02 11:20:44 compute-0 sudo[149424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:44 compute-0 python3.9[149426]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:44 compute-0 ovs-vsctl[149427]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 02 11:20:44 compute-0 sudo[149424]: pam_unix(sudo:session): session closed for user root
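The external_ids written above are the node-local OVN configuration: chassis identity, tunnel endpoint (geneve on 172.19.0.100), bridge mappings, and the southbound database to connect to. A minimal helper with a subset of the logged keys:

    import subprocess

    def set_external_ids(ids: dict) -> None:
        args = ["ovs-vsctl", "set", "open", "."]
        args += [f"external_ids:{k}={v}" for k, v in ids.items()]
        subprocess.run(args, check=True)

    set_external_ids({
        "ovn-encap-ip": "172.19.0.100",
        "ovn-encap-type": "geneve",
        "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
    })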
Feb 02 11:20:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:20:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:44 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:44 compute-0 ceph-mon[74676]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:44 compute-0 sudo[149580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmhaepfqazmwqyvjkcahymxwtenizfri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031244.716691-945-190742432210458/AnsiballZ_command.py'
Feb 02 11:20:44 compute-0 sudo[149580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:45 compute-0 python3.9[149582]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:45 compute-0 sudo[149580]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:45 compute-0 sudo[149736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shnzvjlcslyhqdqgdhsunyohaiurjmzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031245.3628917-969-95724655702158/AnsiballZ_command.py'
Feb 02 11:20:45 compute-0 sudo[149736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:20:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:45 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:45 compute-0 python3.9[149738]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:20:45 compute-0 ovs-vsctl[149739]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 02 11:20:45 compute-0 sudo[149736]: pam_unix(sudo:session): session closed for user root
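Taken together with the grep check a few tasks back, this is a check-then-create idiom: add the localhost OVSDB manager listener on ptcp:6640 only if `ovs-vsctl show` does not already report a Manager. Sketch:

    import subprocess

    show = subprocess.run(["ovs-vsctl", "show"],
                          capture_output=True, text=True, check=True).stdout
    if "Manager" not in show:
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", 'target="ptcp:6640:127.0.0.1"',
             "--", "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True,
        )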
Feb 02 11:20:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:46 compute-0 python3.9[149889]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:20:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:46 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8001810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:46 compute-0 ceph-mon[74676]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:20:46 compute-0 sudo[150042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwfqrmhfeojigqteylpgottbpuaedgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031246.7212424-1020-231719567262658/AnsiballZ_file.py'
Feb 02 11:20:46 compute-0 sudo[150042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:46.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:20:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:46] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Feb 02 11:20:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:46] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Feb 02 11:20:47 compute-0 python3.9[150044]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:47 compute-0 sudo[150042]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:47 compute-0 sudo[150195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhjewjfpotxucjtzeyzdimodmbcsres ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031247.3738391-1044-109134415485526/AnsiballZ_stat.py'
Feb 02 11:20:47 compute-0 sudo[150195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:47 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:47 compute-0 python3.9[150197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:47 compute-0 sudo[150195]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:48 compute-0 sudo[150273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmdgzjcfhcggfslccsdrjbdbiugjgnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031247.3738391-1044-109134415485526/AnsiballZ_file.py'
Feb 02 11:20:48 compute-0 sudo[150273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:48 compute-0 python3.9[150275]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:48 compute-0 sudo[150273]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:48 compute-0 sudo[150425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qypfdrgzfdxzjgmbwtmsudqbmtbcwxvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031248.3389053-1044-235614681567373/AnsiballZ_stat.py'
Feb 02 11:20:48 compute-0 sudo[150425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:48 compute-0 python3.9[150427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:48 compute-0 sudo[150425]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:48 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:48 compute-0 ceph-mon[74676]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:48 compute-0 sudo[150504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqiurueonmtwhguysaafbzqfecpfheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031248.3389053-1044-235614681567373/AnsiballZ_file.py'
Feb 02 11:20:48 compute-0 sudo[150504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:49 compute-0 python3.9[150506]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8001810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:49 compute-0 sudo[150504]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:49 compute-0 sudo[150657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxneokakntxnlepihosatxwxeqalgumj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031249.392241-1113-180719415573499/AnsiballZ_file.py'
Feb 02 11:20:49 compute-0 sudo[150657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:49 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:20:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:49 compute-0 python3.9[150659]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:49 compute-0 sudo[150657]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:49.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:50 compute-0 sudo[150809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zblufbigfiplpoxnzmzxflulkcqnajdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031250.0463014-1137-26055057876442/AnsiballZ_stat.py'
Feb 02 11:20:50 compute-0 sudo[150809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:50 compute-0 python3.9[150811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:50 compute-0 sudo[150809]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:50 compute-0 sudo[150887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noijdpejahapxecrgwltmvrnbnrkvvsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031250.0463014-1137-26055057876442/AnsiballZ_file.py'
Feb 02 11:20:50 compute-0 sudo[150887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:50 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:50 compute-0 python3.9[150889]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:50 compute-0 sudo[150887]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:51 compute-0 ceph-mon[74676]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:20:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:51 compute-0 sudo[151040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oygfgmwfoelbbxsccwypliwlipqdfbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031251.1572359-1173-43006332116543/AnsiballZ_stat.py'
Feb 02 11:20:51 compute-0 sudo[151040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:51 compute-0 python3.9[151042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:51 compute-0 sudo[151040]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:20:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:51 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c80019b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:51 compute-0 sudo[151119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euimnvzhirmkbphgeonifyoakamndokk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031251.1572359-1173-43006332116543/AnsiballZ_file.py'
Feb 02 11:20:51 compute-0 sudo[151119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:51 compute-0 python3.9[151121]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:51 compute-0 sudo[151119]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:52 compute-0 sudo[151271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnllkhbpufwbanfyiykrvlyxigxprgcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031252.196572-1209-33545554246532/AnsiballZ_systemd.py'
Feb 02 11:20:52 compute-0 sudo[151271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:52 compute-0 python3.9[151273]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:20:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:52 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:20:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:52 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:20:52 compute-0 systemd[1]: Reloading.
Feb 02 11:20:52 compute-0 systemd-rc-local-generator[151302]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:20:52 compute-0 systemd-sysv-generator[151305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:20:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:52 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:53 compute-0 sudo[151271]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:53 compute-0 ceph-mon[74676]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:20:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:53 compute-0 sudo[151462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlalomrskuymdynfhmfvuzugkrbjtwuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031253.2121894-1233-3655780511855/AnsiballZ_stat.py'
Feb 02 11:20:53 compute-0 sudo[151462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:53 compute-0 python3.9[151464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:20:53 compute-0 sudo[151462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:53 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:53 compute-0 sudo[151541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdjyxcqzlisdizfvwlhzbvzthuklwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031253.2121894-1233-3655780511855/AnsiballZ_file.py'
Feb 02 11:20:53 compute-0 sudo[151541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:53 compute-0 python3.9[151543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:54 compute-0 sudo[151541]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:54 compute-0 sudo[151695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpcqzdkvdzrvfzhfvbhvkvhwlhzhmmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031254.2464998-1269-256989142472699/AnsiballZ_stat.py'
Feb 02 11:20:54 compute-0 sudo[151695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:54 compute-0 sshd-session[151544]: Received disconnect from 91.224.92.108 port 27596:11:  [preauth]
Feb 02 11:20:54 compute-0 sshd-session[151544]: Disconnected from authenticating user root 91.224.92.108 port 27596 [preauth]
Feb 02 11:20:54 compute-0 python3.9[151697]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:54 compute-0 sudo[151695]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:54 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c80033c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:54 compute-0 sudo[151774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxorodebpkbvrwglraqbboxqudxkacv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031254.2464998-1269-256989142472699/AnsiballZ_file.py'
Feb 02 11:20:54 compute-0 sudo[151774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:55 compute-0 python3.9[151776]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:55 compute-0 sudo[151774]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:55 compute-0 ceph-mon[74676]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:20:55 compute-0 sudo[151927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykbxecrwqgnjjamfyzgrvbxfdfhcmiue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031255.3164742-1305-97213426803320/AnsiballZ_systemd.py'
Feb 02 11:20:55 compute-0 sudo[151927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:55 : epoch 698087df : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:20:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:55.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:55 compute-0 python3.9[151929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:20:55 compute-0 systemd[1]: Reloading.
Feb 02 11:20:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:55.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:56 compute-0 systemd-rc-local-generator[151955]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:20:56 compute-0 systemd-sysv-generator[151958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:20:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:20:56 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 11:20:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 11:20:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 11:20:56 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 11:20:56 compute-0 ceph-mon[74676]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:20:56 compute-0 sudo[151927]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:56 compute-0 sudo[152120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvgiembhzjdpwfonrgobusyghacvaqlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031256.5915606-1335-2487654432762/AnsiballZ_file.py'
Feb 02 11:20:56 compute-0 sudo[152120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:56 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:56.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:20:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:20:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:20:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:56] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Feb 02 11:20:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:20:56] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Feb 02 11:20:57 compute-0 python3.9[152123]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:57 compute-0 sudo[152120]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c80033c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:57 compute-0 sudo[152273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierxhueqicevqrtqrwdznstdtfxcwajl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031257.2119594-1359-254297856048795/AnsiballZ_stat.py'
Feb 02 11:20:57 compute-0 sudo[152273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:57 compute-0 python3.9[152275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:20:57 compute-0 sudo[152273]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:57 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:57.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:20:57 compute-0 sudo[152397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vknzrijysnscqweiiddscuowajgojcli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031257.2119594-1359-254297856048795/AnsiballZ_copy.py'
Feb 02 11:20:57 compute-0 sudo[152397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:20:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:20:58 compute-0 python3.9[152399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031257.2119594-1359-254297856048795/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:58 compute-0 sudo[152397]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:58 compute-0 ceph-mon[74676]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:58 compute-0 sudo[152549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxjftslpiwoqfqjxvkauqovffifdquh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031258.5620837-1410-140336715070316/AnsiballZ_file.py'
Feb 02 11:20:58 compute-0 sudo[152549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:58 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:58 compute-0 python3.9[152551]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:20:58 compute-0 sudo[152549]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:59 compute-0 sudo[152702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfenpckffrfyujcsfwnksrxnbfgcfslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031259.16559-1434-247974902151311/AnsiballZ_file.py'
Feb 02 11:20:59 compute-0 sudo[152702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:20:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:20:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:59 compute-0 python3.9[152704]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:20:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:20:59 compute-0 sudo[152702]: pam_unix(sudo:session): session closed for user root
Feb 02 11:20:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:20:59 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:20:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:20:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:20:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:20:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:20:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:20:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:20:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:20:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:00 compute-0 sudo[152855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvvrfxqalkbbqllqreyoeakswzafrvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031259.848482-1458-165980714636415/AnsiballZ_stat.py'
Feb 02 11:21:00 compute-0 sudo[152855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:00 compute-0 python3.9[152857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:00 compute-0 sudo[152855]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:00 compute-0 sudo[152978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmxdlpqyqhouiecqaksoocgrbwmjmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031259.848482-1458-165980714636415/AnsiballZ_copy.py'
Feb 02 11:21:00 compute-0 sudo[152978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:00 compute-0 python3.9[152980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031259.848482-1458-165980714636415/.source.json _original_basename=.rn8nn3pg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:00 compute-0 ceph-mon[74676]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:21:00 compute-0 sudo[152978]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:00 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:01 compute-0 python3.9[153131]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:21:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:01 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:01.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:01.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112102 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:21:02 compute-0 sudo[153329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:21:02 compute-0 sudo[153329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:02 compute-0 sudo[153329]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:02 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:02 compute-0 ceph-mon[74676]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:21:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:03 compute-0 sudo[153579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtqtxfftyppidjljrmuyosmenltevquh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031262.911141-1578-3798781833301/AnsiballZ_container_config_data.py'
Feb 02 11:21:03 compute-0 sudo[153579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:03 compute-0 python3.9[153581]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 02 11:21:03 compute-0 sudo[153579]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:03 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:03.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:04 compute-0 sudo[153732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdrgqxqjfdvjnzxagztxgdxzhsxhhapm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031263.9407232-1611-248753751147110/AnsiballZ_container_config_hash.py'
Feb 02 11:21:04 compute-0 sudo[153732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:04 compute-0 python3.9[153734]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 11:21:04 compute-0 sudo[153732]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:04 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:05 compute-0 ceph-mon[74676]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:05 compute-0 sudo[153885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rptnkidlatezgctuzfiwynoorxlkcsyy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031264.9163747-1641-28087067454006/AnsiballZ_edpm_container_manage.py'
Feb 02 11:21:05 compute-0 sudo[153885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:05 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:05 compute-0 python3[153887]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 11:21:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:05.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:06 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:21:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:21:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:21:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76dc002170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:07 compute-0 ceph-mon[74676]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:07 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:08 compute-0 ceph-mon[74676]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:08 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:09 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:09.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:10 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76c8003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:10 compute-0 ceph-mon[74676]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:11 compute-0 podman[153901]: 2026-02-02 11:21:11.125508312 +0000 UTC m=+5.348258885 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 11:21:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76b0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:11 compute-0 podman[154029]: 2026-02-02 11:21:11.215465165 +0000 UTC m=+0.019140474 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 11:21:11 compute-0 podman[154029]: 2026-02-02 11:21:11.592730469 +0000 UTC m=+0.396405758 container create daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:21:11 compute-0 python3[153887]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 11:21:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[123772]: 02/02/2026 11:21:11 : epoch 698087df : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f76bc004000 fd 39 proxy ignored for local
Feb 02 11:21:11 compute-0 kernel: ganesha.nfsd[143489]: segfault at 50 ip 00007f776b11532e sp 00007f76cf7fd210 error 4 in libntirpc.so.5.8[7f776b0fa000+2c000] likely on CPU 4 (core 0, socket 4)
Feb 02 11:21:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:21:11 compute-0 systemd[1]: Started Process Core Dump (PID 154069/UID 0).
Feb 02 11:21:11 compute-0 sudo[153885]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:11.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:12 compute-0 sudo[154220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuccviqwsiosjtpoykmcojrphnfqlshv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031271.8622055-1665-260663855138333/AnsiballZ_stat.py'
Feb 02 11:21:12 compute-0 sudo[154220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:12 compute-0 python3.9[154222]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:21:12 compute-0 sudo[154220]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:13 compute-0 sudo[154375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqleojlbzfrsqnyipdrxpojhhkewumky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031273.03136-1692-178077252729832/AnsiballZ_file.py'
Feb 02 11:21:13 compute-0 sudo[154375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:13 compute-0 python3.9[154377]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
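[editor note] The AnsiballZ entries above are ordinary module invocations shipped over sudo; the same calls can be reproduced ad hoc when debugging a run. A sketch, assuming local execution with privilege escalation:

    # Equivalent of the ansible.builtin.stat call logged above.
    ansible localhost -b -m ansible.builtin.stat -a 'path=/etc/sysconfig/podman_drop_in'
    # Equivalent of the ansible-file call removing the stale .requires unit.
    ansible localhost -b -m ansible.builtin.file -a 'path=/etc/systemd/system/edpm_ovn_controller.requires state=absent'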
Feb 02 11:21:13 compute-0 sudo[154375]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:13.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:14 compute-0 sudo[154452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnxfdorufhuggfribnqjwtvachdnlizi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031273.03136-1692-178077252729832/AnsiballZ_stat.py'
Feb 02 11:21:14 compute-0 sudo[154452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:14 compute-0 systemd-coredump[154070]: Process 123776 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 64:
                                                    #0  0x00007f776b11532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
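[editor note] systemd-coredump stored the ganesha.nfsd crash above, so the core and a symbolized backtrace can be pulled back out with coredumpctl. A sketch; the PID 123776 comes from this log, and libntirpc/nfs-ganesha debuginfo packages are likely needed to resolve the libntirpc.so.5.8+0x2232e frame:

    # List the stored dump for the crashed ganesha.nfsd process.
    coredumpctl list 123776
    # Print metadata plus the stack trace systemd-coredump captured.
    coredumpctl info 123776
    # Open the core in gdb; 'bt' then walks the faulting thread.
    coredumpctl debug 123776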
Feb 02 11:21:14 compute-0 ceph-mon[74676]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:14 compute-0 systemd[1]: systemd-coredump@2-154069-0.service: Deactivated successfully.
Feb 02 11:21:14 compute-0 systemd[1]: systemd-coredump@2-154069-0.service: Consumed 1.081s CPU time.
Feb 02 11:21:14 compute-0 podman[154459]: 2026-02-02 11:21:14.209974456 +0000 UTC m=+0.030685001 container died 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:21:14 compute-0 python3.9[154454]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:21:14 compute-0 sudo[154452]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6bf822c54781dd0e3bb6315d8286cdc82afcffbf91cc2ecd26d83a002d6063-merged.mount: Deactivated successfully.
Feb 02 11:21:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:21:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:14 compute-0 podman[154459]: 2026-02-02 11:21:14.61822216 +0000 UTC m=+0.438932705 container remove 9ce82d1f9adaa453f03c050e3896b6a190fc1dae9914225767afdb104522a02d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:21:14 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:21:14 compute-0 sudo[154634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgsijwlaojtzpcjiwaugyrcejljbrkat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031274.2912178-1692-43680515105133/AnsiballZ_copy.py'
Feb 02 11:21:14 compute-0 sudo[154634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:14 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:21:14 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.633s CPU time.
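[editor note] The unit's status=139 is the shell-style exit encoding 128 + signal number, i.e. SIGSEGV (11), consistent with the ganesha.nfsd segfault above. A quick confirmation sketch:

    # 139 - 128 = 11; prints SEGV.
    kill -l 11
    # Inspect the exit status and code systemd recorded for the unit.
    systemctl show 'ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service' \
        -p ExecMainStatus -p ExecMainCode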
Feb 02 11:21:14 compute-0 python3.9[154642]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031274.2912178-1692-43680515105133/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:14 compute-0 sudo[154634]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:15 compute-0 sudo[154726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riawuztssiuoaauqafbjpnrtpzpxfpzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031274.2912178-1692-43680515105133/AnsiballZ_systemd.py'
Feb 02 11:21:15 compute-0 sudo[154726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:15 compute-0 ceph-mon[74676]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:15 compute-0 python3.9[154728]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:21:15 compute-0 systemd[1]: Reloading.
Feb 02 11:21:15 compute-0 systemd-sysv-generator[154760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:21:15 compute-0 systemd-rc-local-generator[154757]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:21:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:15 compute-0 sudo[154726]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:15 compute-0 sudo[154838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxhukgndguoizaefmpecvmutwpyyyta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031274.2912178-1692-43680515105133/AnsiballZ_systemd.py'
Feb 02 11:21:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:15.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:15 compute-0 sudo[154838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:16 compute-0 ceph-mon[74676]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:16 compute-0 python3.9[154840]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:21:16 compute-0 systemd[1]: Reloading.
Feb 02 11:21:16 compute-0 systemd-rc-local-generator[154871]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:21:16 compute-0 systemd-sysv-generator[154874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:21:16 compute-0 systemd[1]: Starting ovn_controller container...
Feb 02 11:21:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:16.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
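[editor note] Alertmanager's webhook posts to the Ceph dashboard receivers on compute-1 and compute-2 timed out. A minimal reachability probe against the same endpoints; the hostnames are taken from the log line, and the 5-second timeout is an arbitrary choice:

    # A hang here reproduces the 'context deadline exceeded' alertmanager saw.
    curl -m 5 -s -o /dev/null -w '%{http_code}\n' \
        http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -m 5 -s -o /dev/null -w '%{http_code}\n' \
        http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver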
Feb 02 11:21:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:16] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:21:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:16] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:21:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72c53daf24b9fd77d70c79efeeb3bcf4b07aece8dd7492944a174c971a85fcb/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b.
Feb 02 11:21:17 compute-0 podman[154883]: 2026-02-02 11:21:17.084390223 +0000 UTC m=+0.455482477 container init daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127)
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + sudo -E kolla_set_configs
Feb 02 11:21:17 compute-0 podman[154883]: 2026-02-02 11:21:17.105681818 +0000 UTC m=+0.476774062 container start daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 11:21:17 compute-0 edpm-start-podman-container[154883]: ovn_controller
Feb 02 11:21:17 compute-0 systemd[1]: Created slice User Slice of UID 0.
Feb 02 11:21:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 02 11:21:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 02 11:21:17 compute-0 systemd[1]: Starting User Manager for UID 0...
Feb 02 11:21:17 compute-0 systemd[154939]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Feb 02 11:21:17 compute-0 edpm-start-podman-container[154882]: Creating additional drop-in dependency for "ovn_controller" (daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b)
Feb 02 11:21:17 compute-0 podman[154908]: 2026-02-02 11:21:17.183596789 +0000 UTC m=+0.068734322 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 02 11:21:17 compute-0 systemd[1]: daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b-251b483ffa220df6.service: Main process exited, code=exited, status=1/FAILURE
Feb 02 11:21:17 compute-0 systemd[1]: daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b-251b483ffa220df6.service: Failed with result 'exit-code'.
Feb 02 11:21:17 compute-0 systemd[1]: Reloading.
Feb 02 11:21:17 compute-0 systemd-rc-local-generator[154984]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:21:17 compute-0 systemd-sysv-generator[154988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:21:17 compute-0 systemd[154939]: Queued start job for default target Main User Target.
Feb 02 11:21:17 compute-0 systemd[154939]: Created slice User Application Slice.
Feb 02 11:21:17 compute-0 systemd[154939]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 02 11:21:17 compute-0 systemd[154939]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 11:21:17 compute-0 systemd[154939]: Reached target Paths.
Feb 02 11:21:17 compute-0 systemd[154939]: Reached target Timers.
Feb 02 11:21:17 compute-0 systemd[154939]: Starting D-Bus User Message Bus Socket...
Feb 02 11:21:17 compute-0 systemd[154939]: Starting Create User's Volatile Files and Directories...
Feb 02 11:21:17 compute-0 systemd[154939]: Finished Create User's Volatile Files and Directories.
Feb 02 11:21:17 compute-0 systemd[154939]: Listening on D-Bus User Message Bus Socket.
Feb 02 11:21:17 compute-0 systemd[154939]: Reached target Sockets.
Feb 02 11:21:17 compute-0 systemd[154939]: Reached target Basic System.
Feb 02 11:21:17 compute-0 systemd[154939]: Reached target Main User Target.
Feb 02 11:21:17 compute-0 systemd[154939]: Startup finished in 145ms.
Feb 02 11:21:17 compute-0 systemd[1]: Started User Manager for UID 0.
Feb 02 11:21:17 compute-0 systemd[1]: Started ovn_controller container.
Feb 02 11:21:17 compute-0 systemd[1]: Started Session c1 of User root.
Feb 02 11:21:17 compute-0 sudo[154838]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:17 compute-0 ovn_controller[154901]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 11:21:17 compute-0 ovn_controller[154901]: INFO:__main__:Validating config file
Feb 02 11:21:17 compute-0 ovn_controller[154901]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 11:21:17 compute-0 ovn_controller[154901]: INFO:__main__:Writing out command to execute
Feb 02 11:21:17 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: ++ cat /run_command
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + ARGS=
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + sudo kolla_copy_cacerts
Feb 02 11:21:17 compute-0 systemd[1]: Started Session c2 of User root.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + [[ ! -n '' ]]
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + . kolla_extend_start
Feb 02 11:21:17 compute-0 ovn_controller[154901]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + umask 0022
Feb 02 11:21:17 compute-0 ovn_controller[154901]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
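[editor note] The exec line maps kolla's rendered command onto standard OVS SSL options: -p is the private key, -c the client certificate, -C the CA bundle used to verify the southbound database at ovsdbserver-sb.openstack.svc:6642 (seen connecting below). A sketch for sanity-checking the running daemon from the host:

    # Which southbound remote ovn-controller was configured to use.
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    # Ask the running ovn-controller about its SB connection state.
    ovn-appctl -t ovn-controller connection-status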
Feb 02 11:21:17 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6112] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6121] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <warn>  [1770031277.6124] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6131] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6138] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6142] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 11:21:17 compute-0 kernel: br-int: entered promiscuous mode
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 11:21:17 compute-0 ovn_controller[154901]: 2026-02-02T11:21:17Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6306] manager: (ovn-79129c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6312] manager: (ovn-636270-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6318] manager: (ovn-182957-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 02 11:21:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:17 compute-0 systemd-udevd[155035]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:21:17 compute-0 systemd-udevd[155036]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:21:17 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6533] device (genev_sys_6081): carrier: link connected
Feb 02 11:21:17 compute-0 NetworkManager[49067]: <info>  [1770031277.6538] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Feb 02 11:21:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:17.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:18 compute-0 python3.9[155165]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 02 11:21:18 compute-0 ceph-mon[74676]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112118 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:21:19 compute-0 sudo[155316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhmfmejwnihhptuumiancqhrdfguaomv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031278.9517913-1827-224686724951689/AnsiballZ_stat.py'
Feb 02 11:21:19 compute-0 sudo[155316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:19 compute-0 python3.9[155318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:19 compute-0 sudo[155316]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:19 compute-0 sudo[155440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsbnkzryralpqkldvqjmbqnsqognnxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031278.9517913-1827-224686724951689/AnsiballZ_copy.py'
Feb 02 11:21:19 compute-0 sudo[155440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:19.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:19 compute-0 python3.9[155442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031278.9517913-1827-224686724951689/.source.yaml _original_basename=.92tnbq7g follow=False checksum=938cc52973dfd319182c1fafafe85bb928ea2a2d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:19 compute-0 sudo[155440]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:19.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:20 compute-0 sudo[155592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomurrqihtbovrpngztextbifzypetdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031280.100739-1872-34106533352085/AnsiballZ_command.py'
Feb 02 11:21:20 compute-0 sudo[155592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:20 compute-0 python3.9[155594]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:21:20 compute-0 ovs-vsctl[155595]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
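[editor note] The remove command above deletes the hw-offload key from other_config outright rather than setting it to false. The surrounding checks would look like this sketch:

    # Prints the current value if the key exists; empty output if it does not.
    ovs-vsctl --if-exists get Open_vSwitch . other_config:hw-offload
    # The inverse operation, should offload be enabled later:
    # ovs-vsctl set Open_vSwitch . other_config:hw-offload=true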
Feb 02 11:21:20 compute-0 sudo[155592]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:20 compute-0 ceph-mon[74676]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:21:20 compute-0 sudo[155746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtkqwsugjpcnohcazdvmmoqnlstkxxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031280.743826-1896-159974771500009/AnsiballZ_command.py'
Feb 02 11:21:21 compute-0 sudo[155746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:21 compute-0 python3.9[155748]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:21:21 compute-0 ovs-vsctl[155750]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
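[editor note] The db_ctl_base error is expected when ovn-cms-options was never set on this chassis. Adding --if-exists turns the missing key into empty output, so the sed pipeline above would not need to tolerate a failing command. A sketch:

    # Returns an empty string instead of exiting with 'no key ... in Open_vSwitch record'.
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'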
Feb 02 11:21:21 compute-0 sudo[155746]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:21:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:21:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6346 writes, 1090 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 19.64 MB, 0.03 MB/s
                                           Interval WAL: 6346 writes, 1090 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 11:21:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:21.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:22 compute-0 sudo[155902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyeptbarhmcugumpgmzzbhyvjiszfbsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031281.9688263-1938-176302162908114/AnsiballZ_command.py'
Feb 02 11:21:22 compute-0 sudo[155902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:22 compute-0 sudo[155904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:21:22 compute-0 sudo[155904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:22 compute-0 sudo[155904]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:22 compute-0 python3.9[155908]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:21:22 compute-0 ovs-vsctl[155930]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 02 11:21:22 compute-0 sudo[155902]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:22 compute-0 sshd-session[143434]: Connection closed by 192.168.122.30 port 54540
Feb 02 11:21:22 compute-0 sshd-session[143431]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:21:22 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Feb 02 11:21:22 compute-0 systemd[1]: session-50.scope: Consumed 52.086s CPU time.
Feb 02 11:21:22 compute-0 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Feb 02 11:21:22 compute-0 systemd-logind[793]: Removed session 50.
Feb 02 11:21:23 compute-0 ceph-mon[74676]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:21:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:21:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:23.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:23.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:24 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 3.
Feb 02 11:21:24 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:21:24 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.633s CPU time.
Feb 02 11:21:24 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:21:25 compute-0 ceph-mon[74676]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:21:25 compute-0 podman[156007]: 2026-02-02 11:21:25.16334695 +0000 UTC m=+0.062056132 container create 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:21:25 compute-0 podman[156007]: 2026-02-02 11:21:25.124628981 +0000 UTC m=+0.023338183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7d50841c2d8dd9f42cbc598723afd263514aafe9cb9f06fd043921b035b24a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7d50841c2d8dd9f42cbc598723afd263514aafe9cb9f06fd043921b035b24a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7d50841c2d8dd9f42cbc598723afd263514aafe9cb9f06fd043921b035b24a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7d50841c2d8dd9f42cbc598723afd263514aafe9cb9f06fd043921b035b24a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:25 compute-0 podman[156007]: 2026-02-02 11:21:25.281516534 +0000 UTC m=+0.180225736 container init 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:21:25 compute-0 podman[156007]: 2026-02-02 11:21:25.285989531 +0000 UTC m=+0.184698703 container start 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:21:25 compute-0 bash[156007]: 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c
Feb 02 11:21:25 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:21:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:21:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:25.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:25.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:26.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:21:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:26] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:21:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:26] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Feb 02 11:21:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:27 compute-0 systemd[1]: Stopping User Manager for UID 0...
Feb 02 11:21:27 compute-0 systemd[154939]: Activating special unit Exit the Session...
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped target Main User Target.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped target Basic System.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped target Paths.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped target Sockets.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped target Timers.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 02 11:21:27 compute-0 systemd[154939]: Closed D-Bus User Message Bus Socket.
Feb 02 11:21:27 compute-0 systemd[154939]: Stopped Create User's Volatile Files and Directories.
Feb 02 11:21:27 compute-0 systemd[154939]: Removed slice User Application Slice.
Feb 02 11:21:27 compute-0 systemd[154939]: Reached target Shutdown.
Feb 02 11:21:27 compute-0 systemd[154939]: Finished Exit the Session.
Feb 02 11:21:27 compute-0 systemd[154939]: Reached target Exit the Session.
Feb 02 11:21:27 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Feb 02 11:21:27 compute-0 systemd[1]: Stopped User Manager for UID 0.
Feb 02 11:21:27 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 02 11:21:27 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 02 11:21:27 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 02 11:21:27 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 02 11:21:27 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Feb 02 11:21:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:27.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:28.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:28 compute-0 ceph-mon[74676]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:29 compute-0 ceph-mon[74676]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:29 compute-0 sshd-session[156071]: Accepted publickey for zuul from 192.168.122.30 port 39196 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:21:29 compute-0 systemd-logind[793]: New session 52 of user zuul.
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:21:29
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', 'backups', 'default.rgw.log', '.nfs', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:21:29 compute-0 systemd[1]: Started Session 52 of User zuul.
Feb 02 11:21:29 compute-0 sshd-session[156071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:21:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:21:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:21:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:21:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:29.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:30 compute-0 python3.9[156225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:21:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:31 compute-0 ceph-mon[74676]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:31 compute-0 sudo[156380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplnjxcokiqzwgprkuelfvngcumiriis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031291.024694-57-271613386281540/AnsiballZ_file.py'
Feb 02 11:21:31 compute-0 sudo[156380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:21:31 compute-0 python3.9[156382]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:31 compute-0 sudo[156380]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:21:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:21:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:31.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:32 compute-0 sudo[156533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhypzajvlpshryobzephjzxuvnbfluah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031291.833843-57-274734656348254/AnsiballZ_file.py'
Feb 02 11:21:32 compute-0 sudo[156533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:32 compute-0 ceph-mon[74676]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:21:32 compute-0 python3.9[156535]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:32 compute-0 sudo[156533]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:32 compute-0 sudo[156685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrcrnwilpbuetsjcszdtiuxqexyrlffz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031292.41633-57-255272273512602/AnsiballZ_file.py'
Feb 02 11:21:32 compute-0 sudo[156685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:32 compute-0 python3.9[156687]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:32 compute-0 sudo[156685]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:33 compute-0 sudo[156838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymusszsllukmdapacdgbtpjzjofujew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031293.003131-57-235972538619381/AnsiballZ_file.py'
Feb 02 11:21:33 compute-0 sudo[156838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:33 compute-0 python3.9[156840]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:33 compute-0 sudo[156838]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:21:33 compute-0 sudo[156991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jishticeexpjdaisujezhmstzhwcxluo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031293.5700061-57-118282280295701/AnsiballZ_file.py'
Feb 02 11:21:33 compute-0 sudo[156991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:33.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:34.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:34 compute-0 python3.9[156993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:34 compute-0 sudo[156991]: pam_unix(sudo:session): session closed for user root
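The ansible.builtin.file invocations above create /var/lib/neutron and three subdirectories with mode 0755, owner/group zuul, and SELinux type container_file_t. A rough Python sketch of the same effect; the real module resolves ownership and contexts through its own libraries, so chcon here is only a stand-in:

    import grp
    import os
    import pwd
    import subprocess

    def make_container_dir(path, owner="zuul", group="zuul", mode=0o755,
                           setype="container_file_t"):
        """Approximate one of the logged ansible.builtin.file tasks."""
        os.makedirs(path, mode=mode, exist_ok=True)
        os.chmod(path, mode)  # makedirs honors umask, so re-apply the mode
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        # The module sets setype via libselinux; chcon is an equivalent.
        subprocess.run(["chcon", "-t", setype, path], check=True)

    for d in ("/var/lib/neutron",
              "/var/lib/neutron/kill_scripts",
              "/var/lib/neutron/ovn-metadata-proxy",
              "/var/lib/neutron/external/pids"):
        make_container_dir(d)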
Feb 02 11:21:34 compute-0 sudo[157093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:21:34 compute-0 sudo[157093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:34 compute-0 sudo[157093]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:34 compute-0 sudo[157141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Feb 02 11:21:34 compute-0 sudo[157141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:34 compute-0 ceph-mon[74676]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:21:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:21:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:21:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:21:34 compute-0 sudo[157141]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:21:34 compute-0 python3.9[157193]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:21:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 sudo[157218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:21:35 compute-0 sudo[157218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:35 compute-0 sudo[157218]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:35 compute-0 sudo[157264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:21:35 compute-0 sudo[157264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:35 compute-0 sudo[157437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leycsnvzxoqgiovkwfzozsxacpwmvcat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031295.1496787-189-157265147726005/AnsiballZ_seboolean.py'
Feb 02 11:21:35 compute-0 sudo[157437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:35 compute-0 sudo[157264]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:21:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
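The cephadm mgr module drives the monitor through mon_command calls such as "config generate-minimal-conf" and "auth get", as the audit lines above show. The same command can be issued from Python through the rados bindings; a minimal sketch, assuming /etc/ceph/ceph.conf and an admin keyring are in place on this host:

    import json
    import rados

    # Assumes the default conf/keyring paths for this cluster.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command dispatched by mgr.compute-0.dhyzzj in the audit log.
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(outbuf.decode())
    finally:
        cluster.shutdown()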
Feb 02 11:21:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:21:35 compute-0 sudo[157449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:21:35 compute-0 sudo[157449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:35 compute-0 sudo[157449]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:35 compute-0 sudo[157475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:21:35 compute-0 sudo[157475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:35 compute-0 python3.9[157445]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
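ansible.posix.seboolean with persistent=True, as invoked above, is equivalent to setsebool -P; the module itself goes through the SELinux Python bindings, so this shell-level sketch is only an approximation:

    import subprocess

    BOOL = "virt_sandbox_use_netlink"

    # -P makes the change persistent, matching persistent=True in the log.
    subprocess.run(["setsebool", "-P", BOOL, "on"], check=True)
    state = subprocess.run(["getsebool", BOOL],
                           capture_output=True, text=True, check=True)
    print(state.stdout.strip())  # e.g. "virt_sandbox_use_netlink --> on"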
Feb 02 11:21:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:35.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:21:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:21:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.102786042 +0000 UTC m=+0.037663360 container create d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:21:36 compute-0 systemd[1]: Started libpod-conmon-d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18.scope.
Feb 02 11:21:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.084063891 +0000 UTC m=+0.018941229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.185986193 +0000 UTC m=+0.120863531 container init d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.194392122 +0000 UTC m=+0.129269450 container start d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.198800937 +0000 UTC m=+0.133678275 container attach d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:21:36 compute-0 systemd[1]: libpod-d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18.scope: Deactivated successfully.
Feb 02 11:21:36 compute-0 epic_herschel[157557]: 167 167
Feb 02 11:21:36 compute-0 conmon[157557]: conmon d6e7c27dae0627d6c785 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18.scope/container/memory.events
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.205356653 +0000 UTC m=+0.140233961 container died d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e3bc7e3552049c43a0c60a75778eb619a113382627c1ec67b447f7ba146d04-merged.mount: Deactivated successfully.
Feb 02 11:21:36 compute-0 podman[157541]: 2026-02-02 11:21:36.248440106 +0000 UTC m=+0.183317424 container remove d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_herschel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:21:36 compute-0 systemd[1]: libpod-conmon-d6e7c27dae0627d6c785344641b2fdc9b6145012defadd7caefc7b1f0879ea18.scope: Deactivated successfully.
Feb 02 11:21:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:21:36 compute-0 sudo[157437]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.40360592 +0000 UTC m=+0.060707924 container create 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:21:36 compute-0 systemd[1]: Started libpod-conmon-6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632.scope.
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.369801211 +0000 UTC m=+0.026903245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.500266684 +0000 UTC m=+0.157368708 container init 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.506988714 +0000 UTC m=+0.164090718 container start 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.511112851 +0000 UTC m=+0.168214875 container attach 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:21:36 compute-0 practical_pare[157624]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:21:36 compute-0 practical_pare[157624]: --> All data devices are unavailable
Feb 02 11:21:36 compute-0 systemd[1]: libpod-6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632.scope: Deactivated successfully.
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.849800244 +0000 UTC m=+0.506902268 container died 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4049c7d33396b29fd458862583a1df5c0df683c22ddb9d9e649b7d5f653d04e0-merged.mount: Deactivated successfully.
Feb 02 11:21:36 compute-0 podman[157583]: 2026-02-02 11:21:36.889285845 +0000 UTC m=+0.546387849 container remove 6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:21:36 compute-0 systemd[1]: libpod-conmon-6d055020e559f9abe76924334c4b0ecd809c10306f156b31922fb50766197632.scope: Deactivated successfully.
Feb 02 11:21:36 compute-0 sudo[157475]: pam_unix(sudo:session): session closed for user root
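The lvm batch run above exits without creating anything ("All data devices are unavailable") because /dev/ceph_vg0/ceph_lv0 already carries ceph LVM tags — it is OSD 1, as the lvm list output further down confirms. One way to check that condition by hand is to read the LV tags, which is where ceph-volume stores its metadata:

    import subprocess

    LV = "/dev/ceph_vg0/ceph_lv0"

    # An existing ceph.osd_id tag means the LV is already consumed
    # by an OSD, so ceph-volume treats it as unavailable.
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", LV],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    if "ceph.osd_id=" in out:
        print("LV already prepared for an OSD:", out)
    else:
        print("LV looks free for ceph-volume")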
Feb 02 11:21:36 compute-0 ceph-mon[74676]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:21:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:21:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:21:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:21:37 compute-0 sudo[157779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:21:37 compute-0 sudo[157779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:37 compute-0 sudo[157779]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:37 compute-0 sudo[157804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:21:37 compute-0 sudo[157804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:37 compute-0 python3.9[157778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.411964098 +0000 UTC m=+0.041322813 container create 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:21:37 compute-0 systemd[1]: Started libpod-conmon-5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498.scope.
Feb 02 11:21:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.394366839 +0000 UTC m=+0.023725574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.501676605 +0000 UTC m=+0.131035340 container init 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.508083836 +0000 UTC m=+0.137442551 container start 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.512314447 +0000 UTC m=+0.141673192 container attach 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:21:37 compute-0 romantic_cohen[157980]: 167 167
Feb 02 11:21:37 compute-0 systemd[1]: libpod-5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498.scope: Deactivated successfully.
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.516277039 +0000 UTC m=+0.145635774 container died 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-64e7827e9b66c0b6560649f289e1d3100714ae26629dbd47e8b032ffd9999534-merged.mount: Deactivated successfully.
Feb 02 11:21:37 compute-0 podman[157940]: 2026-02-02 11:21:37.555955255 +0000 UTC m=+0.185313970 container remove 5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cohen, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:21:37 compute-0 systemd[1]: libpod-conmon-5a44738d9db1e4e8206f48c8e622a3b7f714532a40436604e9972d67e95a6498.scope: Deactivated successfully.
Feb 02 11:21:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:37 compute-0 podman[158034]: 2026-02-02 11:21:37.688324922 +0000 UTC m=+0.046192062 container create 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:21:37 compute-0 python3.9[158009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031296.501837-213-33652333503416/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:37 compute-0 systemd[1]: Started libpod-conmon-6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813.scope.
Feb 02 11:21:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d69fd265871a0e2c6c5e4cab45869bd0b54091e4930dea05d0faf6722a84dc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d69fd265871a0e2c6c5e4cab45869bd0b54091e4930dea05d0faf6722a84dc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d69fd265871a0e2c6c5e4cab45869bd0b54091e4930dea05d0faf6722a84dc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d69fd265871a0e2c6c5e4cab45869bd0b54091e4930dea05d0faf6722a84dc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:37 compute-0 podman[158034]: 2026-02-02 11:21:37.668149109 +0000 UTC m=+0.026016269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:37 compute-0 podman[158034]: 2026-02-02 11:21:37.781337252 +0000 UTC m=+0.139204412 container init 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:21:37 compute-0 podman[158034]: 2026-02-02 11:21:37.78971678 +0000 UTC m=+0.147583920 container start 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:21:37 compute-0 podman[158034]: 2026-02-02 11:21:37.797975404 +0000 UTC m=+0.155842544 container attach 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:21:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:21:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:37.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]: {
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:     "1": [
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:         {
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "devices": [
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "/dev/loop3"
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             ],
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "lv_name": "ceph_lv0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "lv_size": "21470642176",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "name": "ceph_lv0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "tags": {
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.cluster_name": "ceph",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.crush_device_class": "",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.encrypted": "0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.osd_id": "1",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.type": "block",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.vdo": "0",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:                 "ceph.with_tpm": "0"
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             },
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "type": "block",
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:             "vg_name": "ceph_vg0"
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:         }
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]:     ]
Feb 02 11:21:38 compute-0 vigorous_archimedes[158051]: }
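The lvm list output above is valid JSON once the journald prefix is stripped from each container line. A small sketch that recovers it from a saved journal excerpt and pulls out the fields cephadm cares about; the filename is hypothetical:

    import json
    import re

    # Strip everything up to the container stdout marker on each line.
    PREFIX = re.compile(r"^.*vigorous_archimedes\[\d+\]: ")

    # Hypothetical file holding the journal lines shown above.
    with open("journal-excerpt.log") as fh:
        payload = "".join(PREFIX.sub("", line) for line in fh
                          if "vigorous_archimedes[" in line)

    lvm = json.loads(payload)
    for osd_id, vols in lvm.items():
        for vol in vols:
            # Matches the values visible in the log: osd 1 on /dev/loop3.
            print(osd_id, vol["lv_path"], vol["devices"],
                  vol["tags"]["ceph.osd_fsid"])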
Feb 02 11:21:38 compute-0 systemd[1]: libpod-6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813.scope: Deactivated successfully.
Feb 02 11:21:38 compute-0 podman[158034]: 2026-02-02 11:21:38.113993454 +0000 UTC m=+0.471860594 container died 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d69fd265871a0e2c6c5e4cab45869bd0b54091e4930dea05d0faf6722a84dc9-merged.mount: Deactivated successfully.
Feb 02 11:21:38 compute-0 podman[158034]: 2026-02-02 11:21:38.16247343 +0000 UTC m=+0.520340570 container remove 6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:21:38 compute-0 systemd[1]: libpod-conmon-6d730a5998e6df2f5e740c7004171a7d88e1ea123a50a6c600eb9336d24f8813.scope: Deactivated successfully.
Feb 02 11:21:38 compute-0 sudo[157804]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:38 compute-0 sudo[158234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:21:38 compute-0 sudo[158234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:38 compute-0 sudo[158234]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:38 compute-0 python3.9[158223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:38 compute-0 sudo[158259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:21:38 compute-0 sudo[158259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.69379431 +0000 UTC m=+0.037568848 container create 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:21:38 compute-0 systemd[1]: Started libpod-conmon-5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba.scope.
Feb 02 11:21:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.772436432 +0000 UTC m=+0.116211000 container init 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.6772442 +0000 UTC m=+0.021018758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.779393779 +0000 UTC m=+0.123168317 container start 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.782573719 +0000 UTC m=+0.126348257 container attach 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:21:38 compute-0 jovial_chandrasekhar[158464]: 167 167
Feb 02 11:21:38 compute-0 systemd[1]: libpod-5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba.scope: Deactivated successfully.
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.786037078 +0000 UTC m=+0.129811626 container died 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:21:38 compute-0 python3.9[158434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031297.8921115-258-13458986008098/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a16b581ea5985a6da3c43674d39a6ae6e74eb18e67a62f4e7dee0e6b0afb3e0b-merged.mount: Deactivated successfully.
Feb 02 11:21:38 compute-0 podman[158448]: 2026-02-02 11:21:38.828831012 +0000 UTC m=+0.172605550 container remove 5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chandrasekhar, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:21:38 compute-0 systemd[1]: libpod-conmon-5b464a99b804f6c6c144f1340e08767030ca1b04e95369baf24ed28342b528ba.scope: Deactivated successfully.
Feb 02 11:21:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:38 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb384000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:38 compute-0 podman[158515]: 2026-02-02 11:21:38.950251288 +0000 UTC m=+0.037425943 container create df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:21:38 compute-0 systemd[1]: Started libpod-conmon-df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1.scope.
Feb 02 11:21:38 compute-0 ceph-mon[74676]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3039b5fbb93bca278a42c798745893e4e3b96ffe88d7ea6b7903ebdaea2b81fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3039b5fbb93bca278a42c798745893e4e3b96ffe88d7ea6b7903ebdaea2b81fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3039b5fbb93bca278a42c798745893e4e3b96ffe88d7ea6b7903ebdaea2b81fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3039b5fbb93bca278a42c798745893e4e3b96ffe88d7ea6b7903ebdaea2b81fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:39.02994858 +0000 UTC m=+0.117123255 container init df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:38.936426736 +0000 UTC m=+0.023601411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:39.038516434 +0000 UTC m=+0.125691089 container start df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:39.041525059 +0000 UTC m=+0.128699714 container attach df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:21:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378000fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:39 compute-0 sudo[158679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnblltqlivkmwsprwygbwbiwdugkyfur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031299.0910773-309-215761490244682/AnsiballZ_setup.py'
Feb 02 11:21:39 compute-0 sudo[158679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:39 compute-0 python3.9[158685]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:21:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:39 compute-0 lvm[158737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:21:39 compute-0 lvm[158737]: VG ceph_vg0 finished
Feb 02 11:21:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:39 compute-0 lvm[158745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:21:39 compute-0 lvm[158745]: VG ceph_vg0 finished
Feb 02 11:21:39 compute-0 recursing_elion[158532]: {}
Feb 02 11:21:39 compute-0 systemd[1]: libpod-df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1.scope: Deactivated successfully.
Feb 02 11:21:39 compute-0 systemd[1]: libpod-df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1.scope: Consumed 1.055s CPU time.
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:39.771062715 +0000 UTC m=+0.858237370 container died df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3039b5fbb93bca278a42c798745893e4e3b96ffe88d7ea6b7903ebdaea2b81fb-merged.mount: Deactivated successfully.
Feb 02 11:21:39 compute-0 podman[158515]: 2026-02-02 11:21:39.819094878 +0000 UTC m=+0.906269533 container remove df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:21:39 compute-0 systemd[1]: libpod-conmon-df05516ed4b386a444be06e9752ae4c3e639ae105d9583288361530da50067b1.scope: Deactivated successfully.
Feb 02 11:21:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:39.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:39 compute-0 sudo[158259]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:21:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:21:39 compute-0 sudo[158679]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:39 compute-0 sudo[158760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:21:39 compute-0 sudo[158760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:39 compute-0 sudo[158760]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:40 compute-0 sudo[158858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apxzjbhpzpzdjtpjhszjwlrvyquaferv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031299.0910773-309-215761490244682/AnsiballZ_dnf.py'
Feb 02 11:21:40 compute-0 sudo[158858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:40 compute-0 python3.9[158860]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:21:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:40 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112140 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:21:40 compute-0 ceph-mon[74676]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:21:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:41 compute-0 sudo[158858]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:41.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:42 compute-0 sudo[158940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:21:42 compute-0 sudo[158940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:21:42 compute-0 sudo[158940]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:42 compute-0 sudo[159038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxzkkaxkqwmyzszjywbnghslxhulxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031301.969124-345-143089832724629/AnsiballZ_systemd.py'
Feb 02 11:21:42 compute-0 sudo[159038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:42 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:42 compute-0 ceph-mon[74676]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Feb 02 11:21:43 compute-0 python3.9[159040]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:21:43 compute-0 sudo[159038]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3880023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:21:43 compute-0 python3.9[159194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:43.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:44.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:44 compute-0 python3.9[159316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031303.2771432-369-218251151971979/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:21:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:44 compute-0 python3.9[159466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:44 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:44 compute-0 ceph-mon[74676]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:21:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:45 compute-0 python3.9[159588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031304.2936141-369-3029148099862/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:21:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3880023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:45.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:46.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:46 compute-0 python3.9[159739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:46 compute-0 python3.9[159860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031305.902235-501-116202728352142/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:46 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:46 compute-0 ceph-mon[74676]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:46] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:21:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:46] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:21:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:46.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:21:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:46.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:21:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:47 compute-0 python3.9[160011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:47 compute-0 ovn_controller[154901]: 2026-02-02T11:21:47Z|00025|memory|INFO|16000 kB peak resident set size after 30.1 seconds
Feb 02 11:21:47 compute-0 ovn_controller[154901]: 2026-02-02T11:21:47Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Feb 02 11:21:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:47 compute-0 podman[160107]: 2026-02-02 11:21:47.717889781 +0000 UTC m=+0.101901612 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:21:47 compute-0 python3.9[160144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031306.9660263-501-251515235043494/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:21:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:47.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:21:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:48.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:48 compute-0 python3.9[160309]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:21:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:48 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:21:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:48 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:21:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:48 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3880023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:48 compute-0 sudo[160462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrffppuvfoxvkknihshpgjgxpcxewypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031308.6918218-615-59006295798853/AnsiballZ_file.py'
Feb 02 11:21:48 compute-0 sudo[160462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:48 compute-0 ceph-mon[74676]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:49 compute-0 python3.9[160464]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:49 compute-0 sudo[160462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:49 compute-0 sudo[160615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muzrapfgxvcceclcxeswzxubkusnhpjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031309.326832-639-77049744440380/AnsiballZ_stat.py'
Feb 02 11:21:49 compute-0 sudo[160615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112149 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:21:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:49 compute-0 python3.9[160617]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:49 compute-0 sudo[160615]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:49 compute-0 sudo[160693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomipexkgpocyccbperqopsmgoeiwsas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031309.326832-639-77049744440380/AnsiballZ_file.py'
Feb 02 11:21:49 compute-0 sudo[160693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:50 compute-0 python3.9[160695]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:50 compute-0 sudo[160693]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:50 compute-0 sudo[160845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlgwvbjkvbmdesbwgixepdesopmhhlfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031310.3591068-639-108120535724317/AnsiballZ_stat.py'
Feb 02 11:21:50 compute-0 sudo[160845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:50 compute-0 python3.9[160847]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:50 compute-0 sudo[160845]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:50 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:50 compute-0 sudo[160924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axkrfzxubucauzrgrjbvfnsgczniwvlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031310.3591068-639-108120535724317/AnsiballZ_file.py'
Feb 02 11:21:50 compute-0 sudo[160924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:50 compute-0 ceph-mon[74676]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:21:51 compute-0 python3.9[160926]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:51 compute-0 sudo[160924]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3880034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:51 compute-0 sudo[161076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsufcbotukoqgwrkafxdezfjritgmohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031311.303969-708-222689581369201/AnsiballZ_file.py'
Feb 02 11:21:51 compute-0 sudo[161076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:51 compute-0 python3.9[161079]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:51 compute-0 sudo[161076]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:52 compute-0 sudo[161229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhbojaksjbllnznpasmwtkanthdpkju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031311.9236963-732-140577516915403/AnsiballZ_stat.py'
Feb 02 11:21:52 compute-0 sudo[161229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:52 compute-0 python3.9[161231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:52 compute-0 sudo[161229]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:52 compute-0 sudo[161307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwaqfcvulsdiqprkzovymppanmmbdzje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031311.9236963-732-140577516915403/AnsiballZ_file.py'
Feb 02 11:21:52 compute-0 sudo[161307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:52 compute-0 python3.9[161309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:52 compute-0 sudo[161307]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:52 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:53 compute-0 ceph-mon[74676]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:53 compute-0 sudo[161460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtcdgidduwfjvwubxozxjoryuxmpfwso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031312.958421-768-256221371628808/AnsiballZ_stat.py'
Feb 02 11:21:53 compute-0 sudo[161460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:53 compute-0 python3.9[161462]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:53 compute-0 sudo[161460]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:53 compute-0 sudo[161539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iutbbaaqdbussdpkpzbihtjoonkhcusj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031312.958421-768-256221371628808/AnsiballZ_file.py'
Feb 02 11:21:53 compute-0 sudo[161539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3880034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:53 compute-0 python3.9[161541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:53 compute-0 sudo[161539]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:53.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
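Note: the paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.102 and 192.168.122.100, repeating roughly every two seconds throughout this section, have the shape of load-balancer health probes against radosgw. A minimal sketch of such a probe in Python (the target host and port are assumptions; the log only records the client side):

    # Minimal HTTP health probe of the kind radosgw appears to be logging.
    # Host and port are assumptions, not taken from the log.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy radosgw answers 200 with an empty body
    conn.close()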
Feb 02 11:21:54 compute-0 sudo[161691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppjkieisgbhazitgmfrdtudrnkjmtwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031313.9665341-804-278023767429502/AnsiballZ_systemd.py'
Feb 02 11:21:54 compute-0 sudo[161691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:54 compute-0 python3.9[161693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:21:54 compute-0 systemd[1]: Reloading.
Feb 02 11:21:54 compute-0 systemd-rc-local-generator[161717]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:21:54 compute-0 systemd-sysv-generator[161721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:21:54 compute-0 sudo[161691]: pam_unix(sudo:session): session closed for user root
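Note: the ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) boils down to a daemon reload followed by enable-and-start. A rough manual equivalent, not the module's own code:

    # Hand-rolled equivalent of the logged ansible.builtin.systemd step.
    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now",
                    "edpm-container-shutdown.service"], check=True)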
Feb 02 11:21:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:54 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:55 compute-0 ceph-mon[74676]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:21:55 compute-0 sudo[161881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmnjtzanyiazkfwzgdrxrrxptgsgjzka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031314.9903028-828-145902715453077/AnsiballZ_stat.py'
Feb 02 11:21:55 compute-0 sudo[161881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:21:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:55 compute-0 python3.9[161883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:55 compute-0 sudo[161881]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:55 compute-0 sudo[161960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nglutfgduvdkbvrzarjkkbmnwlsuhbqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031314.9903028-828-145902715453077/AnsiballZ_file.py'
Feb 02 11:21:55 compute-0 sudo[161960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:21:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:55 compute-0 python3.9[161962]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:55 compute-0 sudo[161960]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:21:56 compute-0 sudo[162112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbleczkthrodkkeingbrvomjmvrgpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031315.997591-864-265566374222849/AnsiballZ_stat.py'
Feb 02 11:21:56 compute-0 sudo[162112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:56 compute-0 python3.9[162114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:56 compute-0 sudo[162112]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:56 compute-0 sudo[162190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbhwkgvnwnjughmldvrsryhhrukhvgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031315.997591-864-265566374222849/AnsiballZ_file.py'
Feb 02 11:21:56 compute-0 sudo[162190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:56 compute-0 python3.9[162192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:21:56 compute-0 sudo[162190]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:56] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:21:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:21:56] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:21:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:21:56.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:21:57 compute-0 ceph-mon[74676]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:21:57 compute-0 sudo[162343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lprvyvooxhgivhmkkpzvrbpbpmiurkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031316.993906-900-212822605525992/AnsiballZ_systemd.py'
Feb 02 11:21:57 compute-0 sudo[162343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:57 compute-0 python3.9[162345]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:21:57 compute-0 systemd[1]: Reloading.
Feb 02 11:21:57 compute-0 systemd-rc-local-generator[162374]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:21:57 compute-0 systemd-sysv-generator[162377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:21:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:21:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:57 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 11:21:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 11:21:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 11:21:57 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 11:21:57 compute-0 sudo[162343]: pam_unix(sudo:session): session closed for user root
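Note: the mount-unit name run-netns-placeholder.mount unescapes to the path /run/netns/placeholder, which suggests the oneshot service creates (and immediately discards) a placeholder network namespace so that /run/netns exists as a mount for containers to share. A sketch, assuming that reading is correct:

    # Assumed behavior of netns-placeholder.service: creating any namespace
    # forces /run/netns into existence; deleting it unmounts the placeholder,
    # matching the "Deactivated successfully" messages above.
    import subprocess

    subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
    subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)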
Feb 02 11:21:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:21:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:57.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:21:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:21:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:21:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:58 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:21:58 compute-0 sudo[162537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycroqrwhnmkvwskwvcxdswxliloxmzxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031318.1126623-930-78109173440015/AnsiballZ_file.py'
Feb 02 11:21:58 compute-0 sudo[162537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:58 compute-0 python3.9[162539]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:58 compute-0 sudo[162537]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:58 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:58 compute-0 sudo[162690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuwhqthhvrtudgcphkkfthxheamibjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031318.7486188-954-206536846487155/AnsiballZ_stat.py'
Feb 02 11:21:58 compute-0 sudo[162690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:59 compute-0 ceph-mon[74676]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:21:59 compute-0 python3.9[162692]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:21:59 compute-0 sudo[162690]: pam_unix(sudo:session): session closed for user root
Feb 02 11:21:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:59 compute-0 sudo[162813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcolbxcqxdqmhjkcgmeawwxxnjnpkkqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031318.7486188-954-206536846487155/AnsiballZ_copy.py'
Feb 02 11:21:59 compute-0 sudo[162813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:21:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:21:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:21:59 compute-0 python3.9[162815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031318.7486188-954-206536846487155/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:21:59 compute-0 sudo[162813]: pam_unix(sudo:session): session closed for user root
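Note: the ansible.legacy.copy step above installs the healthcheck script with mode 0700, zuul ownership, and the container_file_t SELinux type so the container can read it through a bind mount. Roughly the same by hand (the source filename is illustrative; the logged source is an Ansible temp file):

    # Manual version of the logged copy: install the healthcheck script with
    # the mode, ownership, and SELinux type shown in the module parameters.
    import os
    import shutil
    import subprocess

    dst = "/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck"
    shutil.copy("healthcheck", dst)          # source path is illustrative
    os.chmod(dst, 0o700)
    shutil.chown(dst, user="zuul", group="zuul")
    subprocess.run(["chcon", "-t", "container_file_t", dst], check=True)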
Feb 02 11:21:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:21:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:21:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:21:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:21:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:21:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:21:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:22:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
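Note: the mgr keeps dispatching "osd blocklist ls" to the monitor (also at 11:21:59 and 11:22:14). The same query can be issued from the CLI, assuming an admin keyring is available on the node:

    # The same monitor query the mgr is dispatching, run via the ceph CLI.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])  # empty list when no entries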
Feb 02 11:22:00 compute-0 sudo[162966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfibfrmfsgujzjffewiwgrbazlflcvqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031320.1687942-1005-146104986393199/AnsiballZ_file.py'
Feb 02 11:22:00 compute-0 sudo[162966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:00 compute-0 python3.9[162968]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:00 compute-0 sudo[162966]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:00 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:01 compute-0 ceph-mon[74676]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Feb 02 11:22:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:01 compute-0 sudo[163119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivfnqsgleiqhvaluhectqhfthclcaju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031320.874854-1029-85380751065840/AnsiballZ_file.py'
Feb 02 11:22:01 compute-0 sudo[163119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:22:01 compute-0 python3.9[163121]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:22:01 compute-0 sudo[163119]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:22:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:22:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:22:02 compute-0 sudo[163272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxdudpdypfzbtshwcxsmvznwcvpvgyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031321.7163005-1053-108390749125282/AnsiballZ_stat.py'
Feb 02 11:22:02 compute-0 sudo[163272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:02 compute-0 python3.9[163274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:22:02 compute-0 sudo[163272]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:02 compute-0 sudo[163345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:22:02 compute-0 sudo[163345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:02 compute-0 sudo[163345]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:02 compute-0 sudo[163420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxkzasmcyhysdaskjkkjjtxvbpgrcxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031321.7163005-1053-108390749125282/AnsiballZ_copy.py'
Feb 02 11:22:02 compute-0 sudo[163420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:02 compute-0 python3.9[163422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031321.7163005-1053-108390749125282/.source.json _original_basename=.zh9ok03d follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:02 compute-0 sudo[163420]: pam_unix(sudo:session): session closed for user root
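Note: the copied ovn_metadata_agent.json is a Kolla config descriptor; its contents are not logged, but a config.json of this kind conventionally names the command to exec and the config files to copy in at container start. An illustrative shape only (every value below is an assumption, not recovered from the log):

    # Illustrative shape of a kolla config.json; the real file's contents
    # are not visible in this log.
    import json

    config = {
        "command": "/usr/bin/neutron-ovn-metadata-agent "
                   "--config-dir /etc/neutron.conf.d",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/src/*",
             "dest": "/",
             "merge": True,
             "preserve_properties": True},
        ],
    }
    with open("ovn_metadata_agent.json", "w") as f:
        json.dump(config, f, indent=2)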
Feb 02 11:22:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:02 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:03 compute-0 ceph-mon[74676]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:22:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:03 compute-0 python3.9[163573]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:22:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:03.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:04.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112204 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:22:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:04 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:22:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:04 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:22:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:04 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:05 compute-0 ceph-mon[74676]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:22:05 compute-0 sudo[163996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbpzmstkywpclojtteztmdxezeueruk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031324.8523464-1173-53171595763662/AnsiballZ_container_config_data.py'
Feb 02 11:22:05 compute-0 sudo[163996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:05 compute-0 python3.9[163998]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 02 11:22:05 compute-0 sudo[163996]: pam_unix(sudo:session): session closed for user root
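Note: container_config_data gathers the per-container JSON definitions matching config_pattern under config_path. A minimal sketch of that gathering step, mirroring the logged parameters rather than the module's source:

    # Gather *.json container definitions from the logged config_path.
    import glob
    import json
    import os

    config_path = "/var/lib/edpm-config/container-startup-config/ovn_metadata_agent"
    configs = {}
    for path in glob.glob(os.path.join(config_path, "*.json")):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            configs[name] = json.load(f)
    print(sorted(configs))  # e.g. ['ovn_metadata_agent']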
Feb 02 11:22:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:22:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:05.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:06.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:06 compute-0 sudo[164149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgeyhstifpnufwdxicmbgaddinikhlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031325.7905405-1206-215097049466665/AnsiballZ_container_config_hash.py'
Feb 02 11:22:06 compute-0 sudo[164149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:06 compute-0 python3.9[164151]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 11:22:06 compute-0 sudo[164149]: pam_unix(sudo:session): session closed for user root
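Note: the EDPM_CONFIG_HASH that shows up in the podman lines below is five 64-hex-character strings joined with dashes, i.e. it looks like concatenated sha256 digests of the container's config inputs. A guess at the idea only; the module's exact scheme is not visible in the log:

    # Assumed scheme: one sha256 per config input, dash-joined.
    import hashlib

    def config_hash(paths):
        digests = []
        for p in paths:
            with open(p, "rb") as f:
                digests.append(hashlib.sha256(f.read()).hexdigest())
        return "-".join(digests)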
Feb 02 11:22:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:06 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:22:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Feb 02 11:22:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:06.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:22:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:06.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:07 compute-0 ceph-mon[74676]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:22:07 compute-0 sudo[164302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrlfkpqgzygcdhqrtbzqezmggnawlsya ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031326.7419853-1236-24796195686862/AnsiballZ_edpm_container_manage.py'
Feb 02 11:22:07 compute-0 sudo[164302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:22:07 compute-0 python3[164304]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 11:22:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:08.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:08 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:09 compute-0 ceph-mon[74676]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112209 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:22:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:10 compute-0 ceph-mon[74676]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:10 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:22:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:22:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:12.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:12 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:13 compute-0 ceph-mon[74676]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:22:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:22:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:14.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:22:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:14 compute-0 ceph-mon[74676]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:22:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:14 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:15 compute-0 podman[164317]: 2026-02-02 11:22:15.380276334 +0000 UTC m=+7.848978041 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:22:15 compute-0 podman[164450]: 2026-02-02 11:22:15.503677466 +0000 UTC m=+0.048026284 container create cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:22:15 compute-0 podman[164450]: 2026-02-02 11:22:15.476536896 +0000 UTC m=+0.020885744 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:22:15 compute-0 python3[164304]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:22:15 compute-0 sudo[164302]: pam_unix(sudo:session): session closed for user root
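Note: the PODMAN-CONTAINER-DEBUG line above records the full podman create invocation. A readable core of the same command, abridged to a few of the logged flags (all of which appear verbatim above):

    # Abridged restatement of the logged "podman create"; flags and values
    # are taken from the PODMAN-CONTAINER-DEBUG line, most volumes omitted.
    import subprocess

    subprocess.run([
        "podman", "create",
        "--name", "ovn_metadata_agent",
        "--network", "host", "--pid", "host", "--privileged",
        "--user", "root",
        "--log-driver", "journald",
        "--healthcheck-command", "/openstack/healthcheck",
        "--volume", "/run/netns:/run/netns:shared",
        "--volume", "/var/lib/kolla/config_files/ovn_metadata_agent.json"
                    ":/var/lib/kolla/config_files/config.json:ro",
        "quay.io/podified-antelope-centos9/"
        "openstack-neutron-metadata-agent-ovn:current-podified",
    ], check=True)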
Feb 02 11:22:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:22:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:15.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:15 compute-0 sudo[164639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqumumzujoyqsvufdoodxbynegxixmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031335.768598-1260-109444996025335/AnsiballZ_stat.py'
Feb 02 11:22:16 compute-0 sudo[164639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:16.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:16 compute-0 python3.9[164641]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:22:16 compute-0 sudo[164639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:16 compute-0 ceph-mon[74676]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:22:16 compute-0 sudo[164793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmppkvmexpouvhxcwauubqtovwebiusg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031336.47154-1287-162893250992114/AnsiballZ_file.py'
Feb 02 11:22:16 compute-0 sudo[164793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:16 compute-0 python3.9[164795]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:16 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:16 compute-0 sudo[164793]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:16] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:22:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:16] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:22:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:17.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:22:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:17.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:17 compute-0 sudo[164870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qadjlitqbzkxijbeiuznzttbuomjverk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031336.47154-1287-162893250992114/AnsiballZ_stat.py'
Feb 02 11:22:17 compute-0 sudo[164870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:17 compute-0 python3.9[164872]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:22:17 compute-0 sudo[164870]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:17 compute-0 sudo[165022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntenozehdntypvhfhtmmjlfplrtkani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031337.3706028-1287-272002540285682/AnsiballZ_copy.py'
Feb 02 11:22:17 compute-0 sudo[165022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:17 compute-0 podman[165024]: 2026-02-02 11:22:17.914875981 +0000 UTC m=+0.131742031 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 02 11:22:17 compute-0 python3.9[165025]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031337.3706028-1287-272002540285682/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:17 compute-0 sudo[165022]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:18 compute-0 sudo[165126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqiiiuajutrzhwttvhljknlnfgfkfeyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031337.3706028-1287-272002540285682/AnsiballZ_systemd.py'
Feb 02 11:22:18 compute-0 sudo[165126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:18 compute-0 python3.9[165128]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:22:18 compute-0 systemd[1]: Reloading.
Feb 02 11:22:18 compute-0 systemd-rc-local-generator[165154]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:22:18 compute-0 systemd-sysv-generator[165158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:22:18 compute-0 ceph-mon[74676]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:18 compute-0 sudo[165126]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:18 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:19 compute-0 sudo[165239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvdehhwkwtemczfhlhynfrxkxcnasdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031337.3706028-1287-272002540285682/AnsiballZ_systemd.py'
Feb 02 11:22:19 compute-0 sudo[165239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:19 compute-0 python3.9[165241]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:19 compute-0 systemd[1]: Reloading.
Feb 02 11:22:19 compute-0 systemd-rc-local-generator[165271]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:22:19 compute-0 systemd-sysv-generator[165274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:22:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:19 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Feb 02 11:22:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812013b1e583ffe1d961f1ee6a96039091b87921f5eb7270d3cfd2fe6a2f2160/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812013b1e583ffe1d961f1ee6a96039091b87921f5eb7270d3cfd2fe6a2f2160/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4.
Feb 02 11:22:19 compute-0 podman[165284]: 2026-02-02 11:22:19.827014171 +0000 UTC m=+0.122016714 container init cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: + sudo -E kolla_set_configs
Feb 02 11:22:19 compute-0 podman[165284]: 2026-02-02 11:22:19.848720337 +0000 UTC m=+0.143722860 container start cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 02 11:22:19 compute-0 edpm-start-podman-container[165284]: ovn_metadata_agent
Feb 02 11:22:19 compute-0 edpm-start-podman-container[165283]: Creating additional drop-in dependency for "ovn_metadata_agent" (cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4)
Feb 02 11:22:19 compute-0 podman[165306]: 2026-02-02 11:22:19.91649783 +0000 UTC m=+0.058465779 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:22:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:19 compute-0 systemd[1]: Reloading.
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Validating config file
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Copying service configuration files
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Writing out command to execute
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: ++ cat /run_command
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: + CMD=neutron-ovn-metadata-agent
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: + ARGS=
Feb 02 11:22:19 compute-0 ovn_metadata_agent[165299]: + sudo kolla_copy_cacerts
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: + [[ ! -n '' ]]
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: + . kolla_extend_start
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: Running command: 'neutron-ovn-metadata-agent'
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: + umask 0022
Feb 02 11:22:20 compute-0 ovn_metadata_agent[165299]: + exec neutron-ovn-metadata-agent
Feb 02 11:22:20 compute-0 systemd-rc-local-generator[165375]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:22:20 compute-0 systemd-sysv-generator[165380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:22:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:20 compute-0 systemd[1]: Started ovn_metadata_agent container.
Feb 02 11:22:20 compute-0 sudo[165239]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:20 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:21 compute-0 ceph-mon[74676]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:21 compute-0 python3.9[165538]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 02 11:22:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:21 compute-0 sudo[165692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgtavuuwnmvwphyozasrixjyezrariqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031341.5843253-1422-221493954516189/AnsiballZ_stat.py'
Feb 02 11:22:21 compute-0 sudo[165692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:22 compute-0 python3.9[165694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:22:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:22 compute-0 sudo[165692]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:22 compute-0 sudo[165817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lutgybelkmaldvkvcqirbokfodbxqgev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031341.5843253-1422-221493954516189/AnsiballZ_copy.py'
Feb 02 11:22:22 compute-0 sudo[165817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:22 compute-0 sudo[165820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:22:22 compute-0 sudo[165820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:22 compute-0 sudo[165820]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:22 compute-0 python3.9[165819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031341.5843253-1422-221493954516189/.source.yaml _original_basename=.ztl8ho75 follow=False checksum=1b983d52519cbddc4320bc6d76f5386899d55b07 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:22 compute-0 sudo[165817]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.615 165304 INFO neutron.common.config [-] Logging enabled!
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.616 165304 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.616 165304 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.617 165304 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.618 165304 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.619 165304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.620 165304 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.621 165304 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.622 165304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.623 165304 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.624 165304 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.625 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.626 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.627 165304 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.628 165304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.629 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.630 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.631 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.632 165304 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.633 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.634 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.635 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.636 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.637 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.638 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.639 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.640 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.641 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.642 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.643 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.644 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.645 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.646 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.647 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.648 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.649 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.650 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.651 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.652 165304 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
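
The banner above closes a full dump of the agent's effective configuration. oslo.config produces this with ConfigOpts.log_opt_values(), which walks every registered option and logs one "group.option = value" line at the requested level, masking options registered as secret (hence `transport_url = ****` a few lines up). A minimal sketch, using an illustrative option name:

```python
import logging

from oslo_config import cfg

CONF = cfg.CONF
LOG = logging.getLogger(__name__)

# Illustrative option; real agents register hundreds across many groups.
CONF.register_opts([cfg.IntOpt('es_scroll_size', default=10000)],
                   group='profiler')

logging.basicConfig(level=logging.DEBUG)
CONF([], project='demo')                 # parse an (empty) command line
# Emits the '****' banner, the "Configuration options gathered from:"
# header, then one line per registered option, as in the journal above.
CONF.log_opt_values(LOG, logging.DEBUG)
```
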
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.662 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.662 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.662 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.663 165304 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.663 165304 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
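
The "Created schema index" lines and the connecting/connected pair above are ovsdbapp wrapping the native ovs.db.idl client for the local switch. A hedged sketch of the same sequence (endpoint and the 10s timeout reused from the log; retries and error handling omitted):

```python
from ovs.db import idl
from ovsdbapp.backend.ovs_idl import connection, idlutils
from ovsdbapp.schema.open_vswitch import impl_idl

endpoint = 'tcp:127.0.0.1:6640'
# Fetch the Open_vSwitch schema from the server and register all tables.
helper = idlutils.get_schema_helper(endpoint, 'Open_vSwitch')
helper.register_all()

ovs_idl = idl.Idl(endpoint, helper)
conn = connection.Connection(ovs_idl, timeout=10)
# Backend init autocreates indices such as Bridge.name, Port.name,
# Interface.name -- the three DEBUG lines above.
api = impl_idl.OvsdbIdl(conn)
print(api.list_br().execute(check_error=True))
```
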
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.677 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e4587b97-1121-4d6d-b583-e59641a06362 (UUID: e4587b97-1121-4d6d-b583-e59641a06362) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.698 165304 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.698 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.699 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.699 165304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.701 165304 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.707 165304 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.711 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e4587b97-1121-4d6d-b583-e59641a06362'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], external_ids={}, name=e4587b97-1121-4d6d-b583-e59641a06362, nb_cfg_timestamp=1770031285625, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
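
The "Matched CREATE" line above is ovsdbapp's event machinery testing a Chassis_Private row against a declared RowEvent; the conditions tuple in the log mirrors the constructor arguments. A sketch of how such an event is declared (the run() body is illustrative, not neutron's actual handler):

```python
from ovsdbapp.backend.ovs_idl import event as row_event


class ChassisPrivateCreateEvent(row_event.RowEvent):
    """Fire when this chassis' own Chassis_Private row is created."""

    def __init__(self, chassis_name):
        super().__init__((self.ROW_CREATE,),
                         'Chassis_Private',
                         (('name', '=', chassis_name),))

    def run(self, event, row, old):
        # Invoked from the notification loop once matches() succeeds.
        print('chassis registered:', row.name)
```
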
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.712 165304 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f6ed7535f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.713 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.714 165304 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.714 165304 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.714 165304 INFO oslo_service.service [-] Starting 1 workers
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.719 165304 DEBUG oslo_service.service [-] Started child 165869 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
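
The "Starting 1 workers" / "Started child 165869" pair above is oslo.service's ProcessLauncher forking the metadata-proxy worker; the child (pid 165869) then runs the after_init callbacks logged next. A minimal sketch of that pattern (class name illustrative):

```python
from oslo_config import cfg
from oslo_service import service


class MetadataWorker(service.Service):
    def start(self):
        super().start()
        # Worker body runs here after the fork; the real agent starts
        # its WSGI proxy at this point.


launcher = service.ProcessLauncher(cfg.CONF)
launcher.launch_service(MetadataWorker(), workers=1)  # logs "Starting 1 workers"
launcher.wait()
```
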
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.722 165304 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgceocaxh/privsep.sock']
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.722 165869 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-164831'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.743 165869 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.743 165869 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.743 165869 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.747 165869 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.753 165869 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 11:22:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:22.763 165869 INFO eventlet.wsgi.server [-] (165869) wsgi starting up on http:/var/lib/neutron/metadata_proxy
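
The odd-looking address "http:/var/lib/neutron/metadata_proxy" above is eventlet printing "http:" directly in front of whatever the socket is bound to, which here is a unix socket path rather than host:port. A minimal reproduction (socket path illustrative; the path must not already exist):

```python
import socket

import eventlet
from eventlet import wsgi


def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'metadata proxy stub\n']


sock = eventlet.listen('/tmp/metadata_proxy_demo.sock',
                       family=socket.AF_UNIX)
wsgi.server(sock, app)  # logs "wsgi starting up on http:/tmp/..."
```
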
Feb 02 11:22:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:22 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:22 compute-0 sshd-session[156074]: Connection closed by 192.168.122.30 port 39196
Feb 02 11:22:22 compute-0 sshd-session[156071]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:22:22 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Feb 02 11:22:22 compute-0 systemd[1]: session-52.scope: Consumed 51.146s CPU time.
Feb 02 11:22:22 compute-0 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Feb 02 11:22:22 compute-0 systemd-logind[793]: Removed session 52.
Feb 02 11:22:23 compute-0 ceph-mon[74676]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:22:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:23 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.402 165304 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.402 165304 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgceocaxh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.265 165875 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.271 165875 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.273 165875 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.273 165875 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165875
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.405 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ff9062-e2e9-4f29-8f44-6643286eb464]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
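
The pid-165875 messages above trace an oslo.privsep daemon being forked via rootwrap and then confining itself to the configured capability set; in the option dump earlier, `privsep_namespace.capabilities = [21]` is CAP_SYS_ADMIN in Linux numbering (12 is CAP_NET_ADMIN). A sketch of how such a context is declared (names illustrative; neutron's real contexts live under neutron.privileged):

```python
from oslo_privsep import capabilities, priv_context

namespace_cmd = priv_context.PrivContext(
    __name__,
    cfg_section='privsep_namespace',
    pypath=__name__ + '.namespace_cmd',
    capabilities=[capabilities.CAP_SYS_ADMIN],
)


@namespace_cmd.entrypoint
def create_netns(name):
    """Runs inside the forked privsep daemon with CAP_SYS_ADMIN."""
    # A real implementation would manipulate network namespaces here.
```
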
Feb 02 11:22:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.953 165875 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.954 165875 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:22:23 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:23.954 165875 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
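
The acquired/released pair above (the `inner` wrapper at lockutils.py:404/423) comes from oslo.concurrency's synchronized decorator, as opposed to the plain Acquiring/Acquired/Releasing triplet of the "singleton_lock" context manager seen earlier. A minimal equivalent, reusing the lock name from the log:

```python
from oslo_concurrency import lockutils


@lockutils.synchronized('context-manager')
def _create_context_manager():
    # The wrapper logs 'Lock "context-manager" acquired by ... :: waited'
    # on entry and '... "released" ... :: held' on exit, as above.
    return object()
```
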
Feb 02 11:22:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:24.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.531 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9ae44f-4a06-491b-a439-a2cd39aac5cc]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.533 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, column=external_ids, values=({'neutron:ovn-metadata-id': '3d2e992b-b708-55be-b99d-d3d0ce31838d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.542 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
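
The two transaction commands above (DbAddCommand, then DbSetCommand) map onto ovsdbapp's generic db_add/db_set API. A hedged sketch, under the assumption that an OVN southbound API handle like the one neutron builds is available; the record UUID and values are reused from the log:

```python
def register_metadata(sb_api):
    # `sb_api` is assumed to be an ovsdbapp OVN southbound API object.
    chassis = 'e4587b97-1121-4d6d-b583-e59641a06362'
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id':
             '3d2e992b-b708-55be-b99d-d3d0ce31838d'}))
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))
```
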
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.550 165304 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.550 165304 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.550 165304 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.550 165304 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.551 165304 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.552 165304 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.553 165304 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.554 165304 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.555 165304 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.556 165304 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.557 165304 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.558 165304 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.559 165304 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.560 165304 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.561 165304 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.562 165304 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.563 165304 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.564 165304 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.565 165304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.566 165304 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.567 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.568 165304 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.569 165304 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.570 165304 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.571 165304 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.572 165304 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.573 165304 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.574 165304 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.575 165304 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.576 165304 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.577 165304 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.578 165304 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.579 165304 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.580 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.581 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.582 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.583 165304 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:22:24 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:22:24.584 165304 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 11:22:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:24 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:25 compute-0 ceph-mon[74676]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:26 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:26] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:22:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:26] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Feb 02 11:22:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:27.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:22:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:27.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:27 compute-0 ceph-mon[74676]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:28.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:28 compute-0 ceph-mon[74676]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:28 compute-0 sshd-session[165885]: Accepted publickey for zuul from 192.168.122.30 port 39732 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:22:28 compute-0 systemd-logind[793]: New session 53 of user zuul.
Feb 02 11:22:28 compute-0 systemd[1]: Started Session 53 of User zuul.
Feb 02 11:22:28 compute-0 sshd-session[165885]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:22:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:28 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:29 compute-0 python3.9[166039]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:22:29
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'volumes', 'default.rgw.control']
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:22:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:22:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:22:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:22:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:29.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:30 compute-0 sudo[166194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftwgmbpjmxvbvwfwftwztdqpezspjin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031349.8254583-57-93941982846040/AnsiballZ_command.py'
Feb 02 11:22:30 compute-0 sudo[166194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:30 compute-0 python3.9[166196]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:30 compute-0 sudo[166194]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:30 compute-0 ceph-mon[74676]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:30 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:22:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:31 compute-0 sudo[166361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqudzmzopzmsnoecevmlwqacoldeirbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031351.303156-90-157282867795586/AnsiballZ_systemd_service.py'
Feb 02 11:22:31 compute-0 sudo[166361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:31.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:32.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:32 compute-0 python3.9[166363]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:22:32 compute-0 systemd[1]: Reloading.
Feb 02 11:22:32 compute-0 systemd-rc-local-generator[166390]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:22:32 compute-0 systemd-sysv-generator[166394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:22:32 compute-0 sudo[166361]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:32 compute-0 ceph-mon[74676]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:22:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:32 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:33 compute-0 python3.9[166550]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:22:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:33 compute-0 network[166567]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:22:33 compute-0 network[166568]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:22:33 compute-0 network[166569]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:22:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:33.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:34 compute-0 ceph-mon[74676]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:34 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:35.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:36.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:36 compute-0 sudo[166832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juextrlbnlzaeujvohwqtjwdwqexaugl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031356.4275017-147-25739103230955/AnsiballZ_systemd_service.py'
Feb 02 11:22:36 compute-0 sudo[166832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:36 compute-0 ceph-mon[74676]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:36 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:22:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Feb 02 11:22:36 compute-0 python3.9[166834]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:37.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:37 compute-0 sudo[166832]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:37 compute-0 sudo[166986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owpixsblvmogickvzywvbjfvnbpjdktc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031357.150759-147-164892219807632/AnsiballZ_systemd_service.py'
Feb 02 11:22:37 compute-0 sudo[166986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:37 compute-0 python3.9[166988]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:37 compute-0 sudo[166986]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:37.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:38 compute-0 sudo[167140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxhbsweibxfnfvmmpcrdcyfwobgzonkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031357.8652546-147-84446069087764/AnsiballZ_systemd_service.py'
Feb 02 11:22:38 compute-0 sudo[167140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:38 compute-0 python3.9[167142]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:38 compute-0 sudo[167140]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:38 compute-0 ceph-mon[74676]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:38 compute-0 sudo[167293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxpyagwpauiidlnakuscoaluxutmhdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031358.5429325-147-40803904827083/AnsiballZ_systemd_service.py'
Feb 02 11:22:38 compute-0 sudo[167293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:38 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:39 compute-0 python3.9[167295]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:39 compute-0 sudo[167293]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:39 compute-0 sudo[167447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avthlkxcsgkupueywrgkbduoreafmeht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031359.2546737-147-45451810490106/AnsiballZ_systemd_service.py'
Feb 02 11:22:39 compute-0 sudo[167447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:39 compute-0 python3.9[167449]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:39 compute-0 sudo[167447]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:39.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:40 compute-0 sudo[167553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:22:40 compute-0 sudo[167553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:40 compute-0 sudo[167553]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:40 compute-0 sudo[167647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyjekjfhourvozdraycbfrftszsnlwmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031359.980323-147-204901170397836/AnsiballZ_systemd_service.py'
Feb 02 11:22:40 compute-0 sudo[167647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:40 compute-0 sudo[167605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:22:40 compute-0 sudo[167605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:40 compute-0 python3.9[167651]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:40 compute-0 sudo[167647]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:40 compute-0 sudo[167605]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:22:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:22:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:22:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:22:40 compute-0 sudo[167785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:22:40 compute-0 sudo[167785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:40 compute-0 sudo[167785]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:40 compute-0 sudo[167813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:22:40 compute-0 sudo[167813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:40 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:40 compute-0 sudo[167885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkpdumcnwnadbqjhkhvayafrnxaiqdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031360.7233896-147-13564466443620/AnsiballZ_systemd_service.py'
Feb 02 11:22:40 compute-0 sudo[167885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:41 compute-0 python3.9[167887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.30165791 +0000 UTC m=+0.048583741 container create f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:22:41 compute-0 systemd[1]: Started libpod-conmon-f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295.scope.
Feb 02 11:22:41 compute-0 sudo[167885]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.274388044 +0000 UTC m=+0.021313895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.388916183 +0000 UTC m=+0.135842034 container init f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.397977304 +0000 UTC m=+0.144903135 container start f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.408111256 +0000 UTC m=+0.155037107 container attach f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:22:41 compute-0 romantic_chatterjee[167946]: 167 167
Feb 02 11:22:41 compute-0 systemd[1]: libpod-f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295.scope: Deactivated successfully.
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.410711081 +0000 UTC m=+0.157636922 container died f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-46efc24f7d599081386c6f000572a6ef7ea493e45495ee93f85e22ed54c09c39-merged.mount: Deactivated successfully.
Feb 02 11:22:41 compute-0 podman[167929]: 2026-02-02 11:22:41.446241884 +0000 UTC m=+0.193167715 container remove f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_chatterjee, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:22:41 compute-0 systemd[1]: libpod-conmon-f380ffb6901b5238442b22bced93cbeb289f26c55975f33f413299ede7ea5295.scope: Deactivated successfully.
Feb 02 11:22:41 compute-0 podman[167995]: 2026-02-02 11:22:41.585698642 +0000 UTC m=+0.045831031 container create 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:22:41 compute-0 systemd[1]: Started libpod-conmon-8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661.scope.
Feb 02 11:22:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112241 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:22:41 compute-0 podman[167995]: 2026-02-02 11:22:41.565182261 +0000 UTC m=+0.025314680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:41 compute-0 podman[167995]: 2026-02-02 11:22:41.663644618 +0000 UTC m=+0.123777027 container init 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:22:41 compute-0 podman[167995]: 2026-02-02 11:22:41.669035703 +0000 UTC m=+0.129168092 container start 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:22:41 compute-0 podman[167995]: 2026-02-02 11:22:41.673000087 +0000 UTC m=+0.133132466 container attach 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:22:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:22:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:22:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:22:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:22:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:42 compute-0 sad_tu[168011]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:22:42 compute-0 sad_tu[168011]: --> All data devices are unavailable
Feb 02 11:22:42 compute-0 systemd[1]: libpod-8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661.scope: Deactivated successfully.
Feb 02 11:22:42 compute-0 podman[167995]: 2026-02-02 11:22:42.041669679 +0000 UTC m=+0.501802058 container died 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4038808384eed9ec7f430d5001ee5d851d42d83d746efe7fe2b86e66630b7cf-merged.mount: Deactivated successfully.
Feb 02 11:22:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:42 compute-0 podman[167995]: 2026-02-02 11:22:42.094227223 +0000 UTC m=+0.554359612 container remove 8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:22:42 compute-0 systemd[1]: libpod-conmon-8339eb3e35c99020561b5f533998ebae9bb7adfa075de7f8c5c2b7bc6a00e661.scope: Deactivated successfully.
Feb 02 11:22:42 compute-0 sudo[168163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypuuonwgaayijrgbnfcnglyqadsytaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031361.7455313-303-16790185531266/AnsiballZ_file.py'
Feb 02 11:22:42 compute-0 sudo[167813]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:42 compute-0 sudo[168163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:42 compute-0 sudo[168166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:22:42 compute-0 sudo[168166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:42 compute-0 sudo[168166]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:42 compute-0 sudo[168191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:22:42 compute-0 sudo[168191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:42 compute-0 python3.9[168165]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:42 compute-0 sudo[168163]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:42 compute-0 sudo[168329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:22:42 compute-0 sudo[168329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:42 compute-0 sudo[168329]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.611050614 +0000 UTC m=+0.051799454 container create e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:22:42 compute-0 systemd[1]: Started libpod-conmon-e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15.scope.
Feb 02 11:22:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.584627502 +0000 UTC m=+0.025376372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.681789142 +0000 UTC m=+0.122537992 container init e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.689589466 +0000 UTC m=+0.130338306 container start e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb 02 11:22:42 compute-0 magical_elion[168419]: 167 167
Feb 02 11:22:42 compute-0 systemd[1]: libpod-e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15.scope: Deactivated successfully.
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.698011409 +0000 UTC m=+0.138760269 container attach e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.699046489 +0000 UTC m=+0.139795319 container died e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:22:42 compute-0 sudo[168457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsruajlhmczblwfyhbtkirncgxyfysqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031362.478256-303-129104088310310/AnsiballZ_file.py'
Feb 02 11:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c312f2f7966580505c030d8a95f8aa535ce17451c5ec3f1c81b3186d73b64df-merged.mount: Deactivated successfully.
Feb 02 11:22:42 compute-0 sudo[168457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:42 compute-0 podman[168359]: 2026-02-02 11:22:42.752986603 +0000 UTC m=+0.193735443 container remove e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:22:42 compute-0 systemd[1]: libpod-conmon-e16bba59ad138f328fcde75ff9b29b3d929091fee5e6e506d7a4eef8bb199e15.scope: Deactivated successfully.
Feb 02 11:22:42 compute-0 ceph-mon[74676]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:22:42 compute-0 podman[168474]: 2026-02-02 11:22:42.887435816 +0000 UTC m=+0.041651461 container create 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:22:42 compute-0 python3.9[168465]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:42 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:42 compute-0 systemd[1]: Started libpod-conmon-4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56.scope.
Feb 02 11:22:42 compute-0 sudo[168457]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdca185accb7b1f585aec972141b6ac0c1f6810f07a167ecd3f7610ff44d2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdca185accb7b1f585aec972141b6ac0c1f6810f07a167ecd3f7610ff44d2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdca185accb7b1f585aec972141b6ac0c1f6810f07a167ecd3f7610ff44d2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdca185accb7b1f585aec972141b6ac0c1f6810f07a167ecd3f7610ff44d2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:42 compute-0 podman[168474]: 2026-02-02 11:22:42.872951919 +0000 UTC m=+0.027167584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:42 compute-0 podman[168474]: 2026-02-02 11:22:42.976487372 +0000 UTC m=+0.130703037 container init 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:22:42 compute-0 podman[168474]: 2026-02-02 11:22:42.98195151 +0000 UTC m=+0.136167155 container start 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:22:42 compute-0 podman[168474]: 2026-02-02 11:22:42.986058848 +0000 UTC m=+0.140274513 container attach 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:22:43 compute-0 sudo[168648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brgakbjwturaziqghvamdjoxxijksade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031363.033508-303-278022709339399/AnsiballZ_file.py'
Feb 02 11:22:43 compute-0 sudo[168648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:43 compute-0 awesome_banzai[168490]: {
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:     "1": [
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:         {
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "devices": [
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "/dev/loop3"
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             ],
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "lv_name": "ceph_lv0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "lv_size": "21470642176",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "name": "ceph_lv0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "tags": {
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.cluster_name": "ceph",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.crush_device_class": "",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.encrypted": "0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.osd_id": "1",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.type": "block",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.vdo": "0",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:                 "ceph.with_tpm": "0"
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             },
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "type": "block",
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:             "vg_name": "ceph_vg0"
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:         }
Feb 02 11:22:43 compute-0 awesome_banzai[168490]:     ]
Feb 02 11:22:43 compute-0 awesome_banzai[168490]: }
Feb 02 11:22:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:43 compute-0 systemd[1]: libpod-4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56.scope: Deactivated successfully.
Feb 02 11:22:43 compute-0 podman[168474]: 2026-02-02 11:22:43.294809523 +0000 UTC m=+0.449025178 container died 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cdca185accb7b1f585aec972141b6ac0c1f6810f07a167ecd3f7610ff44d2b-merged.mount: Deactivated successfully.
Feb 02 11:22:43 compute-0 podman[168474]: 2026-02-02 11:22:43.344352901 +0000 UTC m=+0.498568546 container remove 4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:22:43 compute-0 systemd[1]: libpod-conmon-4c7cb4b083e655dd1a34250303c3bd6880e3d05937cade3f545a2cc34e918a56.scope: Deactivated successfully.
Feb 02 11:22:43 compute-0 sudo[168191]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:43 compute-0 sudo[168662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:22:43 compute-0 sudo[168662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:43 compute-0 sudo[168662]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:43 compute-0 python3.9[168650]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:43 compute-0 sudo[168687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:22:43 compute-0 sudo[168687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:43 compute-0 sudo[168648]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:22:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:43 compute-0 sudo[168911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwlptwsqraaizvnxwbseueyafhujviu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031363.6135676-303-237251714280682/AnsiballZ_file.py'
Feb 02 11:22:43 compute-0 sudo[168911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.855350153 +0000 UTC m=+0.039414026 container create b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:22:43 compute-0 systemd[1]: Started libpod-conmon-b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9.scope.
Feb 02 11:22:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.838493047 +0000 UTC m=+0.022556940 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.939441636 +0000 UTC m=+0.123505529 container init b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.945045147 +0000 UTC m=+0.129109030 container start b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.948341582 +0000 UTC m=+0.132405455 container attach b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:22:43 compute-0 compassionate_elion[168924]: 167 167
Feb 02 11:22:43 compute-0 systemd[1]: libpod-b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9.scope: Deactivated successfully.
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.951145773 +0000 UTC m=+0.135209646 container died b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:22:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6a2632971324136f86a5731306ae5b2b836bf312eeb460ddf0ac333107f342-merged.mount: Deactivated successfully.
Feb 02 11:22:43 compute-0 podman[168894]: 2026-02-02 11:22:43.985629877 +0000 UTC m=+0.169693750 container remove b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:22:43 compute-0 systemd[1]: libpod-conmon-b52dcb3c7c7fbf7e98bfeb4ea44c637d5bd95067fb9179786534ec74138df7a9.scope: Deactivated successfully.
Feb 02 11:22:44 compute-0 python3.9[168919]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:44 compute-0 sudo[168911]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.131654454 +0000 UTC m=+0.051612438 container create 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:22:44 compute-0 systemd[1]: Started libpod-conmon-058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c.scope.
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.110817123 +0000 UTC m=+0.030775137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:22:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90de630e2e2413671184c88a4e67b5dccd794d82b455264c8c36daddae794397/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90de630e2e2413671184c88a4e67b5dccd794d82b455264c8c36daddae794397/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90de630e2e2413671184c88a4e67b5dccd794d82b455264c8c36daddae794397/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90de630e2e2413671184c88a4e67b5dccd794d82b455264c8c36daddae794397/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.233934661 +0000 UTC m=+0.153892665 container init 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.246711139 +0000 UTC m=+0.166669123 container start 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.260675251 +0000 UTC m=+0.180633265 container attach 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:22:44 compute-0 sudo[169120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlsngoivqvetkbaxiytdmifjvhkqsrfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031364.183438-303-280214033638168/AnsiballZ_file.py'
Feb 02 11:22:44 compute-0 sudo[169120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:22:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:44 compute-0 python3.9[169127]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:44 compute-0 sudo[169120]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:44 compute-0 lvm[169268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:22:44 compute-0 lvm[169268]: VG ceph_vg0 finished
Feb 02 11:22:44 compute-0 ceph-mon[74676]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:22:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:44 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:44 compute-0 boring_bardeen[169012]: {}
Feb 02 11:22:44 compute-0 systemd[1]: libpod-058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c.scope: Deactivated successfully.
Feb 02 11:22:44 compute-0 systemd[1]: libpod-058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c.scope: Consumed 1.057s CPU time.
Feb 02 11:22:44 compute-0 podman[168948]: 2026-02-02 11:22:44.976174944 +0000 UTC m=+0.896132928 container died 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-90de630e2e2413671184c88a4e67b5dccd794d82b455264c8c36daddae794397-merged.mount: Deactivated successfully.
Feb 02 11:22:45 compute-0 podman[168948]: 2026-02-02 11:22:45.02564893 +0000 UTC m=+0.945606914 container remove 058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bardeen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:22:45 compute-0 sudo[169356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkkxfoucybybadkgdgfmohzogktgjpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031364.7711139-303-130797915784098/AnsiballZ_file.py'
Feb 02 11:22:45 compute-0 sudo[169356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:45 compute-0 systemd[1]: libpod-conmon-058694bbba4a0c7e1a098d9f06a0769a46f2fcbdbad70700a64d5d9f0bb3a62c.scope: Deactivated successfully.
Feb 02 11:22:45 compute-0 sudo[168687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:22:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:22:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:45 compute-0 sudo[169359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:22:45 compute-0 sudo[169359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:22:45 compute-0 sudo[169359]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:45 compute-0 python3.9[169358]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:45 compute-0 sudo[169356]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:45 compute-0 sudo[169534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msttvtntegkoiezsensckjrejxztgnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031365.3810441-303-9334149662621/AnsiballZ_file.py'
Feb 02 11:22:45 compute-0 sudo[169534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:45 compute-0 python3.9[169536]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:45 compute-0 sudo[169534]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:46.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:22:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:46 compute-0 sudo[169686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbupnmhynhllobrloisctarnqityorle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031365.9641414-453-85505520538047/AnsiballZ_file.py'
Feb 02 11:22:46 compute-0 sudo[169686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:46 compute-0 python3.9[169688]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:46 compute-0 sudo[169686]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:46 compute-0 sudo[169838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epefqredycspfslbfpczlndarvvjkzsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031366.5323336-453-105568699106258/AnsiballZ_file.py'
Feb 02 11:22:46 compute-0 sudo[169838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:46 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:46 compute-0 python3.9[169840]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:46 compute-0 sudo[169838]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:46] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:22:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:46] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:22:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:47.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:47 compute-0 ceph-mon[74676]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:47 compute-0 sudo[169991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nduwnvcqfsxupfveecgnqkngjezbrzyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031367.093715-453-25116934970800/AnsiballZ_file.py'
Feb 02 11:22:47 compute-0 sudo[169991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:47 compute-0 python3.9[169993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:47 compute-0 sudo[169991]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:47 compute-0 sudo[170144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmgmcuebjemwvyvtoduwndwszkagrrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031367.6434608-453-232763892487897/AnsiballZ_file.py'
Feb 02 11:22:47 compute-0 sudo[170144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:47.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:48 compute-0 python3.9[170146]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:48 compute-0 sudo[170144]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:22:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:48.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:22:48 compute-0 podman[170177]: 2026-02-02 11:22:48.324556815 +0000 UTC m=+0.107617911 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 11:22:48 compute-0 sudo[170322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzgawwlurxbsaxglhxqstalzqacpmlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031368.2154236-453-64082595897288/AnsiballZ_file.py'
Feb 02 11:22:48 compute-0 sudo[170322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:48 compute-0 python3.9[170324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:48 compute-0 sudo[170322]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:48 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:49 compute-0 ceph-mon[74676]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:49 compute-0 sudo[170475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvztxfodwkjuokeqzbefenfqpjdkulse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031368.8451111-453-6871061058554/AnsiballZ_file.py'
Feb 02 11:22:49 compute-0 sudo[170475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:49 compute-0 python3.9[170477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:49 compute-0 sudo[170475]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:50 compute-0 sudo[170638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotrvpnzwflkgcqezkudjhfowvyxmmds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031369.8220894-453-266476421875152/AnsiballZ_file.py'
Feb 02 11:22:50 compute-0 sudo[170638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:50 compute-0 podman[170602]: 2026-02-02 11:22:50.080751602 +0000 UTC m=+0.049300692 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:22:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:22:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:50.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:22:50 compute-0 python3.9[170646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:22:50 compute-0 sudo[170638]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:50 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:50 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:22:51 compute-0 ceph-mon[74676]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb 02 11:22:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:51 compute-0 sudo[170800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzswknxiyvkgwznsfiqbtnqhenlympud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031371.053276-606-44273623333105/AnsiballZ_command.py'
Feb 02 11:22:51 compute-0 sudo[170800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:51 compute-0 python3.9[170802]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:51 compute-0 sudo[170800]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 85 B/s wr, 171 op/s
Feb 02 11:22:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:51.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:52.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:52 compute-0 python3.9[170955]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:22:52 compute-0 sudo[171105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qikijxzsdgpyhgdqifcktkegfrvefydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031372.568598-660-48718447759021/AnsiballZ_systemd_service.py'
Feb 02 11:22:52 compute-0 sudo[171105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:52 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:53 compute-0 python3.9[171107]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:22:53 compute-0 systemd[1]: Reloading.
Feb 02 11:22:53 compute-0 ceph-mon[74676]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 85 B/s wr, 171 op/s
Feb 02 11:22:53 compute-0 systemd-sysv-generator[171136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:22:53 compute-0 systemd-rc-local-generator[171131]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:22:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:53 compute-0 sudo[171105]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 85 B/s wr, 171 op/s
Feb 02 11:22:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:53 compute-0 sudo[171295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fllasxsygwuzgmijxgazrccssrihkbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031373.5810125-684-106283212472107/AnsiballZ_command.py'
Feb 02 11:22:53 compute-0 sudo[171295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:22:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:22:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:53.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:53 compute-0 python3.9[171297]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:54 compute-0 sudo[171295]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:54 compute-0 sudo[171448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptplpisrydoyladndeammlesjogtbfni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031374.1270323-684-146072639697827/AnsiballZ_command.py'
Feb 02 11:22:54 compute-0 sudo[171448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:54 compute-0 python3.9[171450]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:54 compute-0 sudo[171448]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:54 compute-0 sudo[171602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itbvpbimgqemjwsrnhfrcumahxqyxjhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031374.655152-684-272177114506747/AnsiballZ_command.py'
Feb 02 11:22:54 compute-0 sudo[171602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:54 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:55 compute-0 python3.9[171604]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:55 compute-0 sudo[171602]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:55 compute-0 ceph-mon[74676]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 85 B/s wr, 171 op/s
Feb 02 11:22:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:55 compute-0 sudo[171755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lskbcuduaixzqmrrevjmrjefjrbqrxmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031375.204064-684-173483148616512/AnsiballZ_command.py'
Feb 02 11:22:55 compute-0 sudo[171755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:55 compute-0 python3.9[171757]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:55 compute-0 sudo[171755]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 938 B/s wr, 173 op/s
Feb 02 11:22:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:56 compute-0 sudo[171909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajykiilhpwqqnlpkljdtjevoipigyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031375.7877169-684-2320118599445/AnsiballZ_command.py'
Feb 02 11:22:56 compute-0 sudo[171909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:22:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:22:56 compute-0 python3.9[171911]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:22:56 compute-0 sudo[171909]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:56 compute-0 sudo[172062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlzixbvfdlwxclzwabxehsoyioqnsbgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031376.3375583-684-267999138513972/AnsiballZ_command.py'
Feb 02 11:22:56 compute-0 sudo[172062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:56 compute-0 python3.9[172064]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:56 compute-0 sudo[172062]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:22:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:56] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:22:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:22:56] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Feb 02 11:22:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:22:57.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:22:57 compute-0 sudo[172216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olghbbzfehhhlgoxbpbisfgeiwclyklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031376.8594315-684-52552482438053/AnsiballZ_command.py'
Feb 02 11:22:57 compute-0 sudo[172216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:57 compute-0 ceph-mon[74676]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 938 B/s wr, 173 op/s
Feb 02 11:22:57 compute-0 python3.9[172218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:22:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:57 compute-0 sudo[172216]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 938 B/s wr, 59 op/s
Feb 02 11:22:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:57.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:22:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:22:58 compute-0 sudo[172370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladknzbtxhnexutgybrkwcmyifgofsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031377.826729-846-203163974073628/AnsiballZ_getent.py'
Feb 02 11:22:58 compute-0 sudo[172370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:58 compute-0 python3.9[172372]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 02 11:22:58 compute-0 sudo[172370]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:58 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:58 compute-0 sudo[172524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizzyjwzaotcrgrasszbosqffzzmiady ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031378.5783005-870-27730108191204/AnsiballZ_group.py'
Feb 02 11:22:58 compute-0 sudo[172524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:59 compute-0 python3.9[172526]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:22:59 compute-0 ceph-mon[74676]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 938 B/s wr, 59 op/s
Feb 02 11:22:59 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:22:59 compute-0 groupadd[172527]: group added to /etc/group: name=libvirt, GID=42473
Feb 02 11:22:59 compute-0 groupadd[172527]: group added to /etc/gshadow: name=libvirt
Feb 02 11:22:59 compute-0 groupadd[172527]: new group: name=libvirt, GID=42473
Feb 02 11:22:59 compute-0 sudo[172524]: pam_unix(sudo:session): session closed for user root
Feb 02 11:22:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:22:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:22:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 938 B/s wr, 59 op/s
Feb 02 11:22:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:22:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:22:59 compute-0 sudo[172684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbgammtnzvitonfkkbjtoxrjoiucdxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031379.3809729-894-235892761191127/AnsiballZ_user.py'
Feb 02 11:22:59 compute-0 sudo[172684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:22:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:22:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:22:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:22:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:00 compute-0 python3.9[172686]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 11:23:00 compute-0 useradd[172688]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 11:23:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:00 compute-0 sudo[172684]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:00 compute-0 sudo[172844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toutfhgujyasfaqugzvkfvqrylrzmcky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031380.5152156-927-29035676860941/AnsiballZ_setup.py'
Feb 02 11:23:00 compute-0 sudo[172844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:23:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:00 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:01 compute-0 python3.9[172846]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:23:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:01 compute-0 ceph-mon[74676]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 938 B/s wr, 59 op/s
Feb 02 11:23:01 compute-0 sudo[172844]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:01 compute-0 sudo[172930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxircqnkbqkspfcqcbrrwcuvajvvgacz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031380.5152156-927-29035676860941/AnsiballZ_dnf.py'
Feb 02 11:23:01 compute-0 sudo[172930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:23:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1023 B/s wr, 60 op/s
Feb 02 11:23:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:01 compute-0 python3.9[172932]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:23:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:23:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:23:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:02.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:02 compute-0 ceph-mon[74676]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1023 B/s wr, 60 op/s
Feb 02 11:23:02 compute-0 sudo[172934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:23:02 compute-0 sudo[172934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:02 compute-0 sudo[172934]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:02 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112303 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:23:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:23:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:03.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:04.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:04 compute-0 ceph-mon[74676]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:23:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:04 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:23:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:05.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:06.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:06 compute-0 ceph-mon[74676]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:23:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:06 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:23:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:23:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:07.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:23:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:08.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:08 compute-0 ceph-mon[74676]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:08 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:09.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:10.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:10 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:10 compute-0 ceph-mon[74676]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:11.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:12.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:12 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:12 compute-0 ceph-mon[74676]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:14.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:23:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:14 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:15 compute-0 ceph-mon[74676]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:15.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:16 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:23:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:23:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:17.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:23:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:17.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:23:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:17.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:23:17 compute-0 ceph-mon[74676]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:18.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:18 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:19 compute-0 ceph-mon[74676]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:19 compute-0 podman[173165]: 2026-02-02 11:23:19.323457076 +0000 UTC m=+0.105727127 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 02 11:23:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:20.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:20 compute-0 ceph-mon[74676]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:20 compute-0 podman[173191]: 2026-02-02 11:23:20.262830431 +0000 UTC m=+0.047504740 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 02 11:23:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:20 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:22.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:22.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:23:22.655 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:23:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:23:22.655 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:23:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:23:22.655 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:23:22 compute-0 sudo[173213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:23:22 compute-0 sudo[173213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:22 compute-0 sudo[173213]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:22 compute-0 ceph-mon[74676]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:22 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:24.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:24.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:24 compute-0 ceph-mon[74676]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:24 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:26 compute-0 ceph-mon[74676]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:26 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:23:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:23:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:27.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:23:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:28 compute-0 ceph-mon[74676]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:28 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:23:29
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', '.rgw.root', '.nfs', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images']
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:23:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:23:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:23:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:23:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:30.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:30 compute-0 ceph-mon[74676]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:30 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:31 compute-0 kernel: SELinux:  Converting 2785 SID table entries...
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:23:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:23:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:32.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:32.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:32 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:32 compute-0 ceph-mon[74676]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:34.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:34.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:34 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:35 compute-0 ceph-mon[74676]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388004ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:36.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:36 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:23:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:23:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:37.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:23:37 compute-0 ceph-mon[74676]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:38.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:38.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:38 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:39 compute-0 ceph-mon[74676]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:39 compute-0 sshd-session[173265]: Invalid user mapr from 80.94.92.186 port 49880
Feb 02 11:23:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:40 compute-0 sshd-session[173265]: Connection closed by invalid user mapr 80.94.92.186 port 49880 [preauth]
Feb 02 11:23:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:40.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:40 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:41 compute-0 ceph-mon[74676]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:42.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:42 compute-0 sudo[173270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:23:42 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 02 11:23:42 compute-0 sudo[173270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:42 compute-0 sudo[173270]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:42 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:43 compute-0 ceph-mon[74676]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:23:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:43 compute-0 kernel: SELinux:  Converting 2785 SID table entries...
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:23:43 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:23:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:44.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112344 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:23:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:23:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:44 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:45 compute-0 ceph-mon[74676]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:45 compute-0 sudo[173305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:23:45 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 02 11:23:45 compute-0 sudo[173305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:45 compute-0 sudo[173305]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:45 compute-0 sudo[173330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:23:45 compute-0 sudo[173330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:45 compute-0 sudo[173330]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:23:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:23:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:23:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:23:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:23:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:23:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:46.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:23:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:23:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:23:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:23:46 compute-0 sudo[173386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:23:46 compute-0 sudo[173386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:46 compute-0 sudo[173386]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:46 compute-0 sudo[173411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:23:46 compute-0 sudo[173411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:46.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:23:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.525952566 +0000 UTC m=+0.046491029 container create 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:23:46 compute-0 systemd[1]: Started libpod-conmon-1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a.scope.
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.50415317 +0000 UTC m=+0.024691653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.621971601 +0000 UTC m=+0.142510114 container init 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.630089282 +0000 UTC m=+0.150627745 container start 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:23:46 compute-0 keen_poitras[173493]: 167 167
Feb 02 11:23:46 compute-0 systemd[1]: libpod-1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a.scope: Deactivated successfully.
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.6381417 +0000 UTC m=+0.158680303 container attach 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.639379487 +0000 UTC m=+0.159917980 container died 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4645907a6b532093025820d2fc82780507c0b9c6f7bb02ae606336844092c198-merged.mount: Deactivated successfully.
Feb 02 11:23:46 compute-0 podman[173476]: 2026-02-02 11:23:46.68468215 +0000 UTC m=+0.205220613 container remove 1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:46 compute-0 systemd[1]: libpod-conmon-1b86984411a2f25e594772e9e7d9eb760a62ce5b5946d0303571adaa2d68118a.scope: Deactivated successfully.
Feb 02 11:23:46 compute-0 podman[173517]: 2026-02-02 11:23:46.822665269 +0000 UTC m=+0.051748985 container create 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:23:46 compute-0 systemd[1]: Started libpod-conmon-0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd.scope.
Feb 02 11:23:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:46 compute-0 podman[173517]: 2026-02-02 11:23:46.793407132 +0000 UTC m=+0.022490878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:46 compute-0 podman[173517]: 2026-02-02 11:23:46.911718658 +0000 UTC m=+0.140802394 container init 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:23:46 compute-0 podman[173517]: 2026-02-02 11:23:46.917306163 +0000 UTC m=+0.146389879 container start 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:23:46 compute-0 podman[173517]: 2026-02-02 11:23:46.921174798 +0000 UTC m=+0.150258534 container attach 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:23:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:46 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:46] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:23:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:46] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:23:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:47.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:23:47 compute-0 reverent_dirac[173534]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:23:47 compute-0 reverent_dirac[173534]: --> All data devices are unavailable
Feb 02 11:23:47 compute-0 ceph-mon[74676]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:47 compute-0 systemd[1]: libpod-0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd.scope: Deactivated successfully.
Feb 02 11:23:47 compute-0 podman[173517]: 2026-02-02 11:23:47.258583378 +0000 UTC m=+0.487667114 container died 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Feb 02 11:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ed29e83ec18eaf8279331e8b0b1fed3ef332395fbf32b139b546480c298cd6-merged.mount: Deactivated successfully.
Feb 02 11:23:47 compute-0 podman[173517]: 2026-02-02 11:23:47.300905752 +0000 UTC m=+0.529989468 container remove 0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:47 compute-0 systemd[1]: libpod-conmon-0a3ed0dbd53e2634b628db4f96c49dc30250e8260b971d6379eee252e807c3bd.scope: Deactivated successfully.
Feb 02 11:23:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:47 compute-0 sudo[173411]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:47 compute-0 sudo[173563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:23:47 compute-0 sudo[173563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:47 compute-0 sudo[173563]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:47 compute-0 sudo[173588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:23:47 compute-0 sudo[173588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.828045173 +0000 UTC m=+0.034917516 container create f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:47 compute-0 systemd[1]: Started libpod-conmon-f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd.scope.
Feb 02 11:23:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.895342877 +0000 UTC m=+0.102215250 container init f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.902361135 +0000 UTC m=+0.109233478 container start f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.90624217 +0000 UTC m=+0.113114603 container attach f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:23:47 compute-0 laughing_allen[173671]: 167 167
Feb 02 11:23:47 compute-0 systemd[1]: libpod-f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd.scope: Deactivated successfully.
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.810994308 +0000 UTC m=+0.017866671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.908791236 +0000 UTC m=+0.115663599 container died f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4350d3c073cdb3c3f8d78b8724126b17c3a29d180488b930ac36f0b80e86b80f-merged.mount: Deactivated successfully.
Feb 02 11:23:47 compute-0 podman[173655]: 2026-02-02 11:23:47.954622624 +0000 UTC m=+0.161494967 container remove f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:23:47 compute-0 systemd[1]: libpod-conmon-f8fa5d05cb1628a913861a6e4d72aba7bc521b72a9792bcd69f46da1ce9574fd.scope: Deactivated successfully.
Feb 02 11:23:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:48.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.103056183 +0000 UTC m=+0.041774329 container create baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:23:48 compute-0 systemd[1]: Started libpod-conmon-baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58.scope.
Feb 02 11:23:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac57cb58140c6cbf51208023940281094f92601856c509ddd0ca7a3db7c5666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac57cb58140c6cbf51208023940281094f92601856c509ddd0ca7a3db7c5666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac57cb58140c6cbf51208023940281094f92601856c509ddd0ca7a3db7c5666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac57cb58140c6cbf51208023940281094f92601856c509ddd0ca7a3db7c5666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.085690518 +0000 UTC m=+0.024408694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.182991711 +0000 UTC m=+0.121709877 container init baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.187803984 +0000 UTC m=+0.126522140 container start baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.193534914 +0000 UTC m=+0.132253080 container attach baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:48 compute-0 ceph-mon[74676]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:48 compute-0 gifted_panini[173711]: {
Feb 02 11:23:48 compute-0 gifted_panini[173711]:     "1": [
Feb 02 11:23:48 compute-0 gifted_panini[173711]:         {
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "devices": [
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "/dev/loop3"
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             ],
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "lv_name": "ceph_lv0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "lv_size": "21470642176",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "name": "ceph_lv0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "tags": {
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.cluster_name": "ceph",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.crush_device_class": "",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.encrypted": "0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.osd_id": "1",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.type": "block",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.vdo": "0",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:                 "ceph.with_tpm": "0"
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             },
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "type": "block",
Feb 02 11:23:48 compute-0 gifted_panini[173711]:             "vg_name": "ceph_vg0"
Feb 02 11:23:48 compute-0 gifted_panini[173711]:         }
Feb 02 11:23:48 compute-0 gifted_panini[173711]:     ]
Feb 02 11:23:48 compute-0 gifted_panini[173711]: }
Feb 02 11:23:48 compute-0 systemd[1]: libpod-baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58.scope: Deactivated successfully.
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.507411255 +0000 UTC m=+0.446129401 container died baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac57cb58140c6cbf51208023940281094f92601856c509ddd0ca7a3db7c5666-merged.mount: Deactivated successfully.
Feb 02 11:23:48 compute-0 podman[173695]: 2026-02-02 11:23:48.581710747 +0000 UTC m=+0.520428893 container remove baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:23:48 compute-0 systemd[1]: libpod-conmon-baab89e89c15f2629b6d30f1c1491ea0e699d761864e7e33e745419df4e3ff58.scope: Deactivated successfully.
Feb 02 11:23:48 compute-0 sudo[173588]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:48 compute-0 sudo[173732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:23:48 compute-0 sudo[173732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:48 compute-0 sudo[173732]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:48 compute-0 sudo[173757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:23:48 compute-0 sudo[173757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:48 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.12407604 +0000 UTC m=+0.050984002 container create 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:23:49 compute-0 systemd[1]: Started libpod-conmon-1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5.scope.
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.093140193 +0000 UTC m=+0.020048185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.203977108 +0000 UTC m=+0.130885100 container init 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.209636476 +0000 UTC m=+0.136544438 container start 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.213218892 +0000 UTC m=+0.140126854 container attach 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:23:49 compute-0 boring_vaughan[173843]: 167 167
Feb 02 11:23:49 compute-0 systemd[1]: libpod-1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5.scope: Deactivated successfully.
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.215317634 +0000 UTC m=+0.142225596 container died 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-03c7e56817f6a8a5620f9f76ffa859c816109598dac68a084bb1ac48a7fedb3d-merged.mount: Deactivated successfully.
Feb 02 11:23:49 compute-0 podman[173826]: 2026-02-02 11:23:49.249501107 +0000 UTC m=+0.176409069 container remove 1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_vaughan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:49 compute-0 systemd[1]: libpod-conmon-1ed979812429b023b7521caef7b8db716eaf923ebffce3524f512bcbf2ea71e5.scope: Deactivated successfully.
Feb 02 11:23:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:49 compute-0 podman[173867]: 2026-02-02 11:23:49.38766204 +0000 UTC m=+0.045262001 container create ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:23:49 compute-0 systemd[1]: Started libpod-conmon-ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a.scope.
Feb 02 11:23:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e881d0a435d383c48675c35f929b20aa25c2cdafd09ba3d56500b6f13aa94b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e881d0a435d383c48675c35f929b20aa25c2cdafd09ba3d56500b6f13aa94b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e881d0a435d383c48675c35f929b20aa25c2cdafd09ba3d56500b6f13aa94b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e881d0a435d383c48675c35f929b20aa25c2cdafd09ba3d56500b6f13aa94b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:23:49 compute-0 podman[173867]: 2026-02-02 11:23:49.367259566 +0000 UTC m=+0.024859567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:23:49 compute-0 podman[173867]: 2026-02-02 11:23:49.472275848 +0000 UTC m=+0.129875829 container init ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:23:49 compute-0 podman[173867]: 2026-02-02 11:23:49.477207764 +0000 UTC m=+0.134807725 container start ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:23:49 compute-0 podman[173867]: 2026-02-02 11:23:49.482832421 +0000 UTC m=+0.140432382 container attach ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Feb 02 11:23:49 compute-0 podman[173881]: 2026-02-02 11:23:49.506533583 +0000 UTC m=+0.085151434 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:23:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:50 compute-0 lvm[173984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:23:50 compute-0 lvm[173984]: VG ceph_vg0 finished
Feb 02 11:23:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:50.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:50 compute-0 inspiring_curie[173885]: {}
Feb 02 11:23:50 compute-0 systemd[1]: libpod-ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a.scope: Deactivated successfully.
Feb 02 11:23:50 compute-0 podman[173867]: 2026-02-02 11:23:50.133661248 +0000 UTC m=+0.791261209 container died ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e881d0a435d383c48675c35f929b20aa25c2cdafd09ba3d56500b6f13aa94b-merged.mount: Deactivated successfully.
Feb 02 11:23:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:50.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:50 compute-0 podman[173867]: 2026-02-02 11:23:50.192614045 +0000 UTC m=+0.850214006 container remove ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_curie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:23:50 compute-0 systemd[1]: libpod-conmon-ffd5cd1b2e9428c8affcc7a373f5a4cc385fca3f243cb13147cbfa71fee04f3a.scope: Deactivated successfully.
Feb 02 11:23:50 compute-0 sudo[173757]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:23:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:23:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:50 compute-0 sudo[173998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:23:50 compute-0 sudo[173998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:23:50 compute-0 sudo[173998]: pam_unix(sudo:session): session closed for user root
Feb 02 11:23:50 compute-0 podman[174022]: 2026-02-02 11:23:50.451822886 +0000 UTC m=+0.086538315 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 02 11:23:50 compute-0 ceph-mon[74676]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:23:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:23:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:50 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb368003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:52.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:23:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:52.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:23:52 compute-0 ceph-mon[74676]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:52 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378003480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:23:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:54.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:54.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:54 compute-0 ceph-mon[74676]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:23:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:54 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378003480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:23:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:56.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:23:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:23:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:23:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:56 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:56] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:23:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:23:56] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:23:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:23:57.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:23:57 compute-0 ceph-mon[74676]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:23:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:23:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:23:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:23:58.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:23:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:23:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:23:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:23:58.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:23:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:58 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:59 compute-0 ceph-mon[74676]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:23:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:23:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:23:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:23:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:23:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:23:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:23:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:00 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:01 compute-0 ceph-mon[74676]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:24:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:02 compute-0 sudo[179873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:24:02 compute-0 sudo[179873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:02 compute-0 sudo[179873]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:02 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:03 compute-0 ceph-mon[74676]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:24:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:04 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:05 compute-0 ceph-mon[74676]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:24:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:24:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112406 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:24:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:06 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:06] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:24:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:06] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:24:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:07.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:07 compute-0 ceph-mon[74676]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb 02 11:24:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:07 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:08.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:08.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:08 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:09 compute-0 ceph-mon[74676]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:09 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:10.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:10 compute-0 ceph-mon[74676]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:10 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:11 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:12.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:12 compute-0 ceph-mon[74676]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:24:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:12 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:24:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:13 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:14.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:24:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:14 compute-0 ceph-mon[74676]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:24:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:14 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:24:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:15 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:16.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:16 compute-0 ceph-mon[74676]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:24:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:16 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:24:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:24:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:17.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:24:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:17 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:18.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:18.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:18 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:19 compute-0 ceph-mon[74676]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:19 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:20.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:20.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:20 compute-0 podman[190983]: 2026-02-02 11:24:20.32067875 +0000 UTC m=+0.103043174 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:24:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:20 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:21 compute-0 ceph-mon[74676]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:21 compute-0 podman[191011]: 2026-02-02 11:24:21.299616241 +0000 UTC m=+0.093009138 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 11:24:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:24:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:21 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:22.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:22.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:22 compute-0 ceph-mon[74676]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:24:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:24:22.656 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:24:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:24:22.657 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:24:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:24:22.657 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:24:22 compute-0 sudo[191031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:24:22 compute-0 sudo[191031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:22 compute-0 sudo[191031]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:22 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:23 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:24.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:24 compute-0 ceph-mon[74676]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:24 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c0040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:25 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:26 compute-0 ceph-mon[74676]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:26 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:24:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:24:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:27.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:27 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:24:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:28.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:24:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:28 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:24:29
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.rgw.root', '.nfs', 'default.rgw.control', 'images', 'volumes', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:24:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:24:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:24:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:24:29 compute-0 ceph-mon[74676]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:29 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:29 compute-0 kernel: SELinux:  Converting 2786 SID table entries...
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 11:24:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 11:24:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:30.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:30.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:30 compute-0 ceph-mon[74676]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:30 compute-0 groupadd[191077]: group added to /etc/group: name=dnsmasq, GID=992
Feb 02 11:24:30 compute-0 groupadd[191077]: group added to /etc/gshadow: name=dnsmasq
Feb 02 11:24:30 compute-0 groupadd[191077]: new group: name=dnsmasq, GID=992
Feb 02 11:24:30 compute-0 useradd[191084]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 02 11:24:30 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:24:30 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 02 11:24:30 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 02 11:24:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:30 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:31 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:24:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:24:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:31 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:32 compute-0 groupadd[191098]: group added to /etc/group: name=clevis, GID=991
Feb 02 11:24:32 compute-0 groupadd[191098]: group added to /etc/gshadow: name=clevis
Feb 02 11:24:32 compute-0 groupadd[191098]: new group: name=clevis, GID=991
Feb 02 11:24:32 compute-0 useradd[191105]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 02 11:24:32 compute-0 usermod[191115]: add 'clevis' to group 'tss'
Feb 02 11:24:32 compute-0 usermod[191115]: add 'clevis' to shadow group 'tss'
Feb 02 11:24:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:32 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:33 compute-0 ceph-mon[74676]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:24:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:33 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:24:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:24:34 compute-0 ceph-mon[74676]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:34 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:35 compute-0 polkitd[43551]: Reloading rules
Feb 02 11:24:35 compute-0 polkitd[43551]: Collecting garbage unconditionally...
Feb 02 11:24:35 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 11:24:35 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 11:24:35 compute-0 polkitd[43551]: Finished loading, compiling and executing 3 rules
Feb 02 11:24:35 compute-0 polkitd[43551]: Reloading rules
Feb 02 11:24:35 compute-0 polkitd[43551]: Collecting garbage unconditionally...
Feb 02 11:24:35 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 11:24:35 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 11:24:35 compute-0 polkitd[43551]: Finished loading, compiling and executing 3 rules
Feb 02 11:24:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:35 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:36 compute-0 groupadd[191309]: group added to /etc/group: name=ceph, GID=167
Feb 02 11:24:36 compute-0 groupadd[191309]: group added to /etc/gshadow: name=ceph
Feb 02 11:24:36 compute-0 groupadd[191309]: new group: name=ceph, GID=167
Feb 02 11:24:36 compute-0 useradd[191315]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Feb 02 11:24:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:36.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:36 compute-0 ceph-mon[74676]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:24:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:24:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:36 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:37.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:24:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:37 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c003d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:38 compute-0 ceph-mon[74676]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:38 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:39 compute-0 sshd[1007]: Received signal 15; terminating.
Feb 02 11:24:39 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Feb 02 11:24:39 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Feb 02 11:24:39 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Feb 02 11:24:39 compute-0 systemd[1]: sshd.service: Consumed 2.574s CPU time, read 32.0K from disk, written 36.0K to disk.
Feb 02 11:24:39 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Feb 02 11:24:39 compute-0 systemd[1]: Stopping sshd-keygen.target...
Feb 02 11:24:39 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 11:24:39 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 11:24:39 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 11:24:39 compute-0 systemd[1]: Reached target sshd-keygen.target.
Feb 02 11:24:39 compute-0 systemd[1]: Starting OpenSSH server daemon...
Feb 02 11:24:39 compute-0 sshd[192013]: Server listening on 0.0.0.0 port 22.
Feb 02 11:24:39 compute-0 sshd[192013]: Server listening on :: port 22.
Feb 02 11:24:39 compute-0 systemd[1]: Started OpenSSH server daemon.
Feb 02 11:24:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112439 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:24:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:39 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:40.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:40 compute-0 ceph-mon[74676]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:24:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:24:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:41 compute-0 systemd[1]: Reloading.
Feb 02 11:24:41 compute-0 systemd-rc-local-generator[192271]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:41 compute-0 systemd-sysv-generator[192275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:24:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:41 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb358001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:42 compute-0 sudo[194381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:24:42 compute-0 sudo[194381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:42 compute-0 sudo[194381]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:43 compute-0 ceph-mon[74676]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:24:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb35c003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:43 compute-0 sudo[172930]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:43 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:44 compute-0 sudo[196707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdlprxferflkdowvelzekwilbmeolmnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031483.9165316-963-166120966579978/AnsiballZ_systemd.py'
Feb 02 11:24:44 compute-0 sudo[196707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:24:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:44 compute-0 python3.9[196726]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:24:44 compute-0 systemd[1]: Reloading.
Feb 02 11:24:44 compute-0 systemd-rc-local-generator[197301]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:45 compute-0 systemd-sysv-generator[197308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:45 compute-0 ceph-mon[74676]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:45 compute-0 sudo[196707]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:45 compute-0 sudo[198136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cltgqwxsithtbkeodjjirmabpbggfspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031485.3501096-963-259193397682013/AnsiballZ_systemd.py'
Feb 02 11:24:45 compute-0 sudo[198136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:45 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:45 compute-0 python3.9[198157]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:24:45 compute-0 systemd[1]: Reloading.
Feb 02 11:24:46 compute-0 systemd-sysv-generator[198684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:46 compute-0 systemd-rc-local-generator[198680]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:46.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:46 compute-0 sudo[198136]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:46 compute-0 sudo[199518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqowfmyymnewjrezzgvbqetvrdgwlpkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031486.4538248-963-65225565960736/AnsiballZ_systemd.py'
Feb 02 11:24:46 compute-0 sudo[199518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:46] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:24:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:46] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:47.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:24:47 compute-0 python3.9[199543]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:47.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:47.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:47 compute-0 systemd[1]: Reloading.
Feb 02 11:24:47 compute-0 ceph-mon[74676]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:24:47 compute-0 systemd-rc-local-generator[200164]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:47 compute-0 systemd-sysv-generator[200168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:47 compute-0 sudo[199518]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:47 compute-0 sudo[200921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vymkeyrlqvcnduyjrvkfodckblvhduhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031487.532501-963-137668276681392/AnsiballZ_systemd.py'
Feb 02 11:24:47 compute-0 sudo[200921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:47 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=cleanup t=2026-02-02T11:24:48.000225019Z level=info msg="Completed cleanup jobs" duration=27.057469ms
Feb 02 11:24:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana.update.checker t=2026-02-02T11:24:48.090011795Z level=info msg="Update check succeeded" duration=53.211652ms
Feb 02 11:24:48 compute-0 python3.9[200943]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:24:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugins.update.checker t=2026-02-02T11:24:48.133199909Z level=info msg="Update check succeeded" duration=94.08598ms
Feb 02 11:24:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:48 compute-0 systemd[1]: Reloading.
Feb 02 11:24:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:24:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:24:48 compute-0 systemd-sysv-generator[201430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:48 compute-0 systemd-rc-local-generator[201427]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:24:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:24:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 8.930s CPU time.
Feb 02 11:24:48 compute-0 systemd[1]: run-r6708bc2835af4109a4b97043677951a1.service: Deactivated successfully.
Feb 02 11:24:48 compute-0 sudo[200921]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:48 compute-0 sudo[201610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmrzwzkhcckrnuryiezgoehzujgjbaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031488.6589718-1050-44093277788925/AnsiballZ_systemd.py'
Feb 02 11:24:48 compute-0 sudo[201610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:49 compute-0 ceph-mon[74676]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:49 compute-0 python3.9[201612]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:49 compute-0 systemd[1]: Reloading.
Feb 02 11:24:49 compute-0 systemd-rc-local-generator[201642]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:49 compute-0 systemd-sysv-generator[201645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:24:49 compute-0 sudo[201610]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:49 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:50 compute-0 sudo[201801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtypunycjsmptqhnrhavezdelgsasno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031489.7333272-1050-262627896241266/AnsiballZ_systemd.py'
Feb 02 11:24:50 compute-0 sudo[201801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:50 compute-0 python3.9[201803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:50 compute-0 systemd[1]: Reloading.
Feb 02 11:24:50 compute-0 podman[201805]: 2026-02-02 11:24:50.495831902 +0000 UTC m=+0.117168916 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:24:50 compute-0 systemd-sysv-generator[201862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:50 compute-0 systemd-rc-local-generator[201857]: /etc/rc.d/rc.local is not marked executable, skipping.
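The ansible-ansible.builtin.systemd entries are the EDPM deploy job enabling the modular libvirt daemons one unit at a time (virtnodedevd here; virtproxyd, virtqemud, virtsecretd and the virtlogd/virtproxyd sockets follow below). Each enable triggers a full `systemd[1]: Reloading.`, which re-runs the unit generators, so the rc.local and SysV network warnings repeat after every task. With enabled=True and state=None the module only enables the unit; a rough host-side equivalent, as a sketch:

    import subprocess

    # enabled=True, state=None maps to a plain "systemctl enable";
    # units are started separately when a task passes state=started.
    for unit in ["virtnodedevd.service", "virtproxyd.service",
                 "virtqemud.service", "virtsecretd.service"]:
        subprocess.run(["systemctl", "enable", unit], check=True)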
Feb 02 11:24:50 compute-0 sudo[201868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:24:50 compute-0 sudo[201868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:50 compute-0 sudo[201868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:50 compute-0 sudo[201801]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:50 compute-0 sudo[201893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:24:50 compute-0 sudo[201893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:51 compute-0 sudo[202087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upsmyvfbkknviafhjzprrkspitcagphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031490.8846018-1050-239677464740907/AnsiballZ_systemd.py'
Feb 02 11:24:51 compute-0 sudo[202087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:51 compute-0 ceph-mon[74676]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:24:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:24:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:24:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:51 compute-0 sudo[201893]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:24:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
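The handle_command/audit pairs above show the mon receiving JSON-framed commands from the cephadm mgr module; {"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} is the wire form of `ceph config rm osd/host:compute-0 osd_memory_target`. A minimal sketch of sending the same command through the python-rados binding, assuming an admin keyring in the default location:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    cmd = {"prefix": "config rm",
           "who": "osd/host:compute-0",
           "name": "osd_memory_target"}
    # mon_command takes the JSON command string plus an input buffer.
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()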
Feb 02 11:24:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:51 compute-0 python3.9[202089]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:51 compute-0 systemd[1]: Reloading.
Feb 02 11:24:51 compute-0 podman[202105]: 2026-02-02 11:24:51.59803385 +0000 UTC m=+0.065312702 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 02 11:24:51 compute-0 systemd-sysv-generator[202158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:51 compute-0 systemd-rc-local-generator[202154]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:24:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:51 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:51 compute-0 sudo[202087]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:24:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:24:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:52.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
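The starting/done/beast triplets are anonymous `HEAD /` probes arriving every two seconds from 192.168.122.100 and 192.168.122.102, i.e. external health checks against the RGW beast frontend; a healthy gateway answers 200 with an empty body at effectively zero latency. A throwaway probe in the same spirit (host and port are assumptions; neither is visible in these lines):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # assumed endpoint
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy RGW returns 200 with no body
    conn.close()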
Feb 02 11:24:52 compute-0 sudo[202311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udindpuouztbuipybibqiavnvoycjdfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031492.0348332-1050-69708661289504/AnsiballZ_systemd.py'
Feb 02 11:24:52 compute-0 sudo[202311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:52 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:24:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:52 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:24:52 compute-0 python3.9[202313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:52 compute-0 sudo[202311]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:24:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:24:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:53 compute-0 sudo[202467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txdbfjkpmufgccbytnzwctwfdeomqyfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031492.8328545-1050-276991646489951/AnsiballZ_systemd.py'
Feb 02 11:24:53 compute-0 sudo[202467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:24:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:53 compute-0 python3.9[202469]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb360004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:53 compute-0 systemd[1]: Reloading.
Feb 02 11:24:53 compute-0 systemd-sysv-generator[202501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:53 compute-0 systemd-rc-local-generator[202498]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:24:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:24:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:24:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:53 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:53 compute-0 sudo[202509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:24:53 compute-0 sudo[202509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:53 compute-0 sudo[202509]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:53 compute-0 sudo[202467]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:53 compute-0 sudo[202534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:24:53 compute-0 sudo[202534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
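The sudo command above is the cephadm mgr module driving a one-shot `ceph-volume lvm batch` through the cephadm script it copied under /var/lib/ceph/<fsid>/; `--config-json -` feeds the minimal ceph.conf and bootstrap keyring on stdin instead of writing them to disk. A sketch of the same call shape (payload contents elided, and the --image/--env flags from the log dropped for brevity):

    import json
    import subprocess

    fsid = "1d33f80b-d6ca-501c-bac7-184379b89279"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    cfg = {"config": "...", "keyring": "..."}  # real contents supplied on stdin
    subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--config-json", "-",
         "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=json.dumps(cfg).encode(), check=True)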
Feb 02 11:24:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:54.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:24:54 compute-0 ceph-mon[74676]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.35980555 +0000 UTC m=+0.065093246 container create 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:24:54 compute-0 systemd[1]: Started libpod-conmon-9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07.scope.
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.318934383 +0000 UTC m=+0.024222109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:54 compute-0 sudo[202763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksrqvsupxlmqnyrqymrpeisrbcslumch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031494.0826297-1158-274734341180228/AnsiballZ_systemd.py'
Feb 02 11:24:54 compute-0 sudo[202763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.446048834 +0000 UTC m=+0.151336550 container init 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.454003433 +0000 UTC m=+0.159291129 container start 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.460052738 +0000 UTC m=+0.165340454 container attach 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:24:54 compute-0 gallant_poincare[202761]: 167 167
Feb 02 11:24:54 compute-0 systemd[1]: libpod-9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07.scope: Deactivated successfully.
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.468075709 +0000 UTC m=+0.173363405 container died 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-89bc755362aa76db0af3337d05e2fd41357404ec23ca8a0dc1a1149ff5eefdca-merged.mount: Deactivated successfully.
Feb 02 11:24:54 compute-0 podman[202695]: 2026-02-02 11:24:54.677227593 +0000 UTC m=+0.382515289 container remove 9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:24:54 compute-0 python3.9[202766]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 11:24:54 compute-0 systemd[1]: libpod-conmon-9bc490878d17cf29b908cced14a9858563755a3d29b7c3872b58334baf060a07.scope: Deactivated successfully.
Feb 02 11:24:54 compute-0 systemd[1]: Reloading.
Feb 02 11:24:54 compute-0 podman[202793]: 2026-02-02 11:24:54.821378525 +0000 UTC m=+0.039967582 container create 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:24:54 compute-0 systemd-rc-local-generator[202840]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:24:54 compute-0 systemd-sysv-generator[202843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:24:54 compute-0 podman[202793]: 2026-02-02 11:24:54.805143088 +0000 UTC m=+0.023732175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:55 compute-0 systemd[1]: Started libpod-conmon-8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8.scope.
Feb 02 11:24:55 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 02 11:24:55 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 02 11:24:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:55 compute-0 podman[202793]: 2026-02-02 11:24:55.160810782 +0000 UTC m=+0.379399839 container init 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:24:55 compute-0 sudo[202763]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:55 compute-0 podman[202793]: 2026-02-02 11:24:55.168805732 +0000 UTC m=+0.387394789 container start 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:55 compute-0 podman[202793]: 2026-02-02 11:24:55.176084642 +0000 UTC m=+0.394673699 container attach 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:24:55 compute-0 auditd[704]: Audit daemon rotating log files
Feb 02 11:24:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:55 compute-0 stupefied_mendeleev[202850]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:24:55 compute-0 stupefied_mendeleev[202850]: --> All data devices are unavailable
Feb 02 11:24:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:24:55 compute-0 systemd[1]: libpod-8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8.scope: Deactivated successfully.
Feb 02 11:24:55 compute-0 podman[202793]: 2026-02-02 11:24:55.556357514 +0000 UTC m=+0.774946581 container died 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:24:55 compute-0 sudo[203016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcekjzbecvlnccemsqnhzzboiqdwzqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031495.315621-1182-167823814902036/AnsiballZ_systemd.py'
Feb 02 11:24:55 compute-0 sudo[203016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9a47762ced50ff21b6000a0bd7435c94121afa3b4739b1215a0103ca57b1aa1-merged.mount: Deactivated successfully.
Feb 02 11:24:55 compute-0 podman[202793]: 2026-02-02 11:24:55.611946175 +0000 UTC m=+0.830535232 container remove 8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:24:55 compute-0 systemd[1]: libpod-conmon-8e0e63ba6ad026e4caaaee4a34ec0d4d1052e5bbb32d883e1d3390a09dcc1cd8.scope: Deactivated successfully.
Feb 02 11:24:55 compute-0 sudo[202534]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:55 compute-0 sudo[203032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:24:55 compute-0 sudo[203032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:55 compute-0 sudo[203032]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:55 compute-0 sudo[203057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:24:55 compute-0 sudo[203057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:55 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:55 compute-0 python3.9[203019]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:55 compute-0 sudo[203016]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.14642229 +0000 UTC m=+0.041812706 container create 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:24:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:56 compute-0 systemd[1]: Started libpod-conmon-21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f.scope.
Feb 02 11:24:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.128958957 +0000 UTC m=+0.024349403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.226297501 +0000 UTC m=+0.121687937 container init 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.233458487 +0000 UTC m=+0.128848903 container start 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.237774041 +0000 UTC m=+0.133164467 container attach 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Feb 02 11:24:56 compute-0 musing_yonath[203224]: 167 167
Feb 02 11:24:56 compute-0 systemd[1]: libpod-21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f.scope: Deactivated successfully.
Feb 02 11:24:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.242479437 +0000 UTC m=+0.137869853 container died 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-558ba4d304a39954e26ac9f25278d2687a559b1ad1846d5c7c9f8a9f60bf5549-merged.mount: Deactivated successfully.
Feb 02 11:24:56 compute-0 podman[203172]: 2026-02-02 11:24:56.280273805 +0000 UTC m=+0.175664221 container remove 21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:24:56 compute-0 systemd[1]: libpod-conmon-21641f873658058056a950587456d49a89d29ea4d2bea558251e704e8d49f70f.scope: Deactivated successfully.
Feb 02 11:24:56 compute-0 sudo[203311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noeterwrbhwpzxxhuhswrcamcxawuobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031496.0971725-1182-49556634832431/AnsiballZ_systemd.py'
Feb 02 11:24:56 compute-0 sudo[203311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.437628737 +0000 UTC m=+0.047040026 container create 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:24:56 compute-0 systemd[1]: Started libpod-conmon-8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b.scope.
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.413167893 +0000 UTC m=+0.022579212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a45ef807d7b56f4d4d51393fbc2de3ed88853aad867fe82d3f008712da0d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a45ef807d7b56f4d4d51393fbc2de3ed88853aad867fe82d3f008712da0d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a45ef807d7b56f4d4d51393fbc2de3ed88853aad867fe82d3f008712da0d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a45ef807d7b56f4d4d51393fbc2de3ed88853aad867fe82d3f008712da0d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.532593593 +0000 UTC m=+0.142004912 container init 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.539289725 +0000 UTC m=+0.148701024 container start 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.543498517 +0000 UTC m=+0.152909826 container attach 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:24:56 compute-0 python3.9[203316]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:56 compute-0 sudo[203311]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:56 compute-0 eager_swanson[203336]: {
Feb 02 11:24:56 compute-0 eager_swanson[203336]:     "1": [
Feb 02 11:24:56 compute-0 eager_swanson[203336]:         {
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "devices": [
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "/dev/loop3"
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             ],
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "lv_name": "ceph_lv0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "lv_size": "21470642176",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "name": "ceph_lv0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "tags": {
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.cluster_name": "ceph",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.crush_device_class": "",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.encrypted": "0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.osd_id": "1",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.type": "block",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.vdo": "0",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:                 "ceph.with_tpm": "0"
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             },
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "type": "block",
Feb 02 11:24:56 compute-0 eager_swanson[203336]:             "vg_name": "ceph_vg0"
Feb 02 11:24:56 compute-0 eager_swanson[203336]:         }
Feb 02 11:24:56 compute-0 eager_swanson[203336]:     ]
Feb 02 11:24:56 compute-0 eager_swanson[203336]: }
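The eager_swanson output above is `ceph-volume lvm list --format json` run in a throwaway container: OSD 1 already lives on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), which is why the earlier lvm batch pass reported "All data devices are unavailable". A short sketch for pulling the useful fields out of that report on an OSD host:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    # Top-level keys are OSD ids; each maps to a list of logical volumes.
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])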
Feb 02 11:24:56 compute-0 ceph-mon[74676]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:56 compute-0 systemd[1]: libpod-8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b.scope: Deactivated successfully.
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.859705205 +0000 UTC m=+0.469116514 container died 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0a45ef807d7b56f4d4d51393fbc2de3ed88853aad867fe82d3f008712da0d2-merged.mount: Deactivated successfully.
Feb 02 11:24:56 compute-0 podman[203319]: 2026-02-02 11:24:56.926924971 +0000 UTC m=+0.536336260 container remove 8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:24:56 compute-0 systemd[1]: libpod-conmon-8c703187eaaf2ca3668039119b69055dfb8090b3c669b36b3cef1f479a180c9b.scope: Deactivated successfully.
Feb 02 11:24:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:24:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:24:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:24:56 compute-0 sudo[203057]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:24:57.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:24:57 compute-0 sudo[203439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:24:57 compute-0 sudo[203439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:57 compute-0 sudo[203439]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:57 compute-0 sudo[203487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:24:57 compute-0 sudo[203487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:57 compute-0 sudo[203562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmcbypauzxnvssyhuteahtiqalddzkoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031496.895389-1182-152040664296998/AnsiballZ_systemd.py'
Feb 02 11:24:57 compute-0 sudo[203562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.510609542 +0000 UTC m=+0.038437798 container create 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:57 compute-0 python3.9[203564]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:57 compute-0 systemd[1]: Started libpod-conmon-7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc.scope.
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.494073166 +0000 UTC m=+0.021901442 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.602861489 +0000 UTC m=+0.130689775 container init 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.610275453 +0000 UTC m=+0.138103709 container start 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.614619628 +0000 UTC m=+0.142447904 container attach 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:24:57 compute-0 gracious_moore[203623]: 167 167
Feb 02 11:24:57 compute-0 systemd[1]: libpod-7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc.scope: Deactivated successfully.
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.618679105 +0000 UTC m=+0.146507381 container died 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-54c4db0f6a2d9237738b7810e32128bf0f571c5e6acde6e9dc4e860cc712de40-merged.mount: Deactivated successfully.
Feb 02 11:24:57 compute-0 sudo[203562]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:57 compute-0 podman[203605]: 2026-02-02 11:24:57.653850008 +0000 UTC m=+0.181678274 container remove 7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:24:57 compute-0 systemd[1]: libpod-conmon-7ddf872d183085d97afe56dc58fcef1824e78aa97ee94a67b1afaea85ac12cdc.scope: Deactivated successfully.
Feb 02 11:24:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:57 compute-0 podman[203680]: 2026-02-02 11:24:57.794350135 +0000 UTC m=+0.044532354 container create 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:24:57 compute-0 systemd[1]: Started libpod-conmon-9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b.scope.
Feb 02 11:24:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:57 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:57 compute-0 podman[203680]: 2026-02-02 11:24:57.776363097 +0000 UTC m=+0.026545346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:24:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b10faeeb13f0e1c1cb2bc041b466cb7cd11fd87c7c204c4e23de481a622026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b10faeeb13f0e1c1cb2bc041b466cb7cd11fd87c7c204c4e23de481a622026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b10faeeb13f0e1c1cb2bc041b466cb7cd11fd87c7c204c4e23de481a622026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b10faeeb13f0e1c1cb2bc041b466cb7cd11fd87c7c204c4e23de481a622026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:24:57 compute-0 podman[203680]: 2026-02-02 11:24:57.894234302 +0000 UTC m=+0.144416521 container init 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:57 compute-0 podman[203680]: 2026-02-02 11:24:57.901976065 +0000 UTC m=+0.152158284 container start 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:57 compute-0 podman[203680]: 2026-02-02 11:24:57.908466872 +0000 UTC m=+0.158649111 container attach 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:24:57 compute-0 sudo[203818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjbynaujpmcpfpqattucnweetbfjngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031497.757652-1182-170359470267935/AnsiballZ_systemd.py'
Feb 02 11:24:58 compute-0 sudo[203818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:24:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:24:58.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:24:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:24:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:24:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:24:58.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:24:58 compute-0 python3.9[203820]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:58 compute-0 sudo[203818]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:58 compute-0 lvm[203940]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:24:58 compute-0 lvm[203940]: VG ceph_vg0 finished
Feb 02 11:24:58 compute-0 lvm[203971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:24:58 compute-0 lvm[203971]: VG ceph_vg0 finished
Feb 02 11:24:58 compute-0 heuristic_borg[203748]: {}
Feb 02 11:24:58 compute-0 systemd[1]: libpod-9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b.scope: Deactivated successfully.
Feb 02 11:24:58 compute-0 systemd[1]: libpod-9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b.scope: Consumed 1.092s CPU time.
Feb 02 11:24:58 compute-0 podman[203680]: 2026-02-02 11:24:58.655759187 +0000 UTC m=+0.905941406 container died 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b10faeeb13f0e1c1cb2bc041b466cb7cd11fd87c7c204c4e23de481a622026-merged.mount: Deactivated successfully.
Feb 02 11:24:58 compute-0 podman[203680]: 2026-02-02 11:24:58.719526963 +0000 UTC m=+0.969709182 container remove 9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:24:58 compute-0 systemd[1]: libpod-conmon-9483c4564266c710821b4b63bf2c24ac438b8dbefcaf7b23345c84a5366d7c7b.scope: Deactivated successfully.
Feb 02 11:24:58 compute-0 sudo[203487]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:24:58 compute-0 sudo[204058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-macypfumrzzkymjskgrecjsbiubgrnpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031498.5322506-1182-259676158942739/AnsiballZ_systemd.py'
Feb 02 11:24:58 compute-0 sudo[204058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:24:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:58 compute-0 sudo[204061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:24:58 compute-0 ceph-mon[74676]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:58 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:58 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:24:58 compute-0 sudo[204061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:24:58 compute-0 sudo[204061]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:59 compute-0 python3.9[204060]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:24:59 compute-0 sudo[204058]: pam_unix(sudo:session): session closed for user root
Feb 02 11:24:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:24:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:59 compute-0 sudo[204240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcmcjoecgcyfftcqysbjsdewjysqunln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031499.348398-1182-3016766993744/AnsiballZ_systemd.py'
Feb 02 11:24:59 compute-0 sudo[204240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:24:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:24:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:24:59 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:24:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:24:59 compute-0 python3.9[204242]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:00 compute-0 sudo[204240]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:00.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:00 compute-0 sudo[204395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpdnrafclihvzbwhbsdctdezreuqgrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031500.1488013-1182-82613745542290/AnsiballZ_systemd.py'
Feb 02 11:25:00 compute-0 sudo[204395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:00 compute-0 python3.9[204397]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:00 compute-0 sudo[204395]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:00 compute-0 ceph-mon[74676]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:25:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:01 compute-0 sudo[204551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuowxleuaytvifiosfqdmskvpqqlomcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031500.9103627-1182-199028465731770/AnsiballZ_systemd.py'
Feb 02 11:25:01 compute-0 sudo[204551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388005090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:01 compute-0 python3.9[204553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:01 compute-0 sudo[204551]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:25:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112501 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:25:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:01 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb36c004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:01 compute-0 sudo[204707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhwygcpmvocdnoeeqtjfyysvjxfbjgxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031501.67028-1182-273286978629343/AnsiballZ_systemd.py'
Feb 02 11:25:01 compute-0 sudo[204707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:02 compute-0 python3.9[204709]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:02.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:02 compute-0 sudo[204707]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:02 compute-0 sudo[204862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhcijwvzolynpyytazyffwrlozvjkxvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031502.4414217-1182-93604050232956/AnsiballZ_systemd.py'
Feb 02 11:25:02 compute-0 sudo[204862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:02 compute-0 ceph-mon[74676]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:25:03 compute-0 python3.9[204864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3580049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:03 compute-0 sudo[204867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:25:03 compute-0 sudo[204867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:25:03 compute-0 sudo[204867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:03 compute-0 sudo[204862]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb378003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:03 compute-0 sudo[205043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeifoegjmbfoddijiinxcekdqvmwsox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031503.194426-1182-114798996265457/AnsiballZ_systemd.py'
Feb 02 11:25:03 compute-0 sudo[205043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:03 compute-0 python3.9[205045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:03 compute-0 sudo[205043]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:03 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388005090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:04 compute-0 sudo[205199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vquasgxyibupoplnqshhxwhtmnpshwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031503.9011803-1182-202528310744736/AnsiballZ_systemd.py'
Feb 02 11:25:04 compute-0 sudo[205199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:04.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:04 compute-0 python3.9[205201]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:04 compute-0 sudo[205199]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:04 compute-0 sudo[205355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otccepzbrgcaweqwezyatltfuddxbmno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031504.6546023-1182-67223773417654/AnsiballZ_systemd.py'
Feb 02 11:25:04 compute-0 sudo[205355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:04 compute-0 ceph-mon[74676]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:05 compute-0 kernel: ganesha.nfsd[197327]: segfault at 50 ip 00007fb40dd8732e sp 00007fb3a17f9210 error 4 in libntirpc.so.5.8[7fb40dd6c000+2c000] likely on CPU 3 (core 0, socket 3)
Feb 02 11:25:05 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:25:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[156023]: 02/02/2026 11:25:05 : epoch 698088b5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb388005090 fd 48 proxy ignored for local
Feb 02 11:25:05 compute-0 systemd[1]: Started Process Core Dump (PID 205358/UID 0).
Feb 02 11:25:05 compute-0 python3.9[205357]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:05 compute-0 sudo[205355]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:05 compute-0 sudo[205513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwtoarztminnckuhkxasyvlkkvkhsyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031505.382021-1182-70814706461674/AnsiballZ_systemd.py'
Feb 02 11:25:05 compute-0 sudo[205513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:05 compute-0 python3.9[205515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 11:25:05 compute-0 sudo[205513]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:06.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:06 compute-0 systemd-coredump[205359]: Process 156027 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 66:
                                                    #0  0x00007fb40dd8732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:25:06 compute-0 systemd[1]: systemd-coredump@3-205358-0.service: Deactivated successfully.
Feb 02 11:25:06 compute-0 systemd[1]: systemd-coredump@3-205358-0.service: Consumed 1.067s CPU time.
Feb 02 11:25:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:06.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:06 compute-0 podman[205547]: 2026-02-02 11:25:06.277187223 +0000 UTC m=+0.025470215 container died 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d7d50841c2d8dd9f42cbc598723afd263514aafe9cb9f06fd043921b035b24a-merged.mount: Deactivated successfully.
Feb 02 11:25:06 compute-0 podman[205547]: 2026-02-02 11:25:06.309552865 +0000 UTC m=+0.057835847 container remove 73084ba91b37e224c4e40d2346727f06385ade97ac8f938705cff67b24bc764c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:25:06 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:25:06 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:25:06 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.504s CPU time.
Feb 02 11:25:06 compute-0 sudo[205715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrotvczvrmbfklhhgfftguvjckrxnode ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031506.5514953-1488-13025057988443/AnsiballZ_file.py'
Feb 02 11:25:06 compute-0 sudo[205715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:25:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:25:07 compute-0 python3.9[205717]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:07 compute-0 ceph-mon[74676]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:07 compute-0 sudo[205715]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:07.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:25:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:07.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:25:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:07.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:25:07 compute-0 sudo[205868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldfgrocngeqxlhnlohnnbfekfreknyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031507.1381917-1488-84246630674928/AnsiballZ_file.py'
Feb 02 11:25:07 compute-0 sudo[205868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:07 compute-0 python3.9[205870]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:07 compute-0 sudo[205868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:08 compute-0 sudo[206021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnwofncsrdqpqpybxephkrmvrrsnwxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031507.8173678-1488-104800658209863/AnsiballZ_file.py'
Feb 02 11:25:08 compute-0 sudo[206021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:08 compute-0 python3.9[206023]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:08 compute-0 sudo[206021]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:08.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:08 compute-0 sudo[206173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwmxqoctpkahaecpukvptqsbjvkrkcvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031508.3742125-1488-175225567623130/AnsiballZ_file.py'
Feb 02 11:25:08 compute-0 sudo[206173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:08 compute-0 python3.9[206175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:08 compute-0 sudo[206173]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:09 compute-0 ceph-mon[74676]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:09 compute-0 sudo[206326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzurmcvsnzmyfsvtqlqjrwikavdouzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031508.899773-1488-228371178886693/AnsiballZ_file.py'
Feb 02 11:25:09 compute-0 sudo[206326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:09 compute-0 python3.9[206328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:09 compute-0 sudo[206326]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:09 compute-0 sudo[206479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpoposlimizozflhompfekwzegulvlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031509.4469385-1488-1876534552482/AnsiballZ_file.py'
Feb 02 11:25:09 compute-0 sudo[206479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:09 compute-0 python3.9[206481]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:25:09 compute-0 sudo[206479]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:10 compute-0 python3.9[206631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:25:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112511 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:25:11 compute-0 ceph-mon[74676]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:11 compute-0 sudo[206782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavhhfpjzircsloudlaesydqbpwdnjan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031510.7897553-1641-16121129903048/AnsiballZ_stat.py'
Feb 02 11:25:11 compute-0 sudo[206782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:11 compute-0 python3.9[206784]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:11 compute-0 sudo[206782]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:11 compute-0 sudo[206908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sazeuipqwmuedisxyyilsnhzarjjhziw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031510.7897553-1641-16121129903048/AnsiballZ_copy.py'
Feb 02 11:25:11 compute-0 sudo[206908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:12 compute-0 python3.9[206910]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031510.7897553-1641-16121129903048/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:12 compute-0 sudo[206908]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:12.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:12 compute-0 sudo[207060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqxercwtsmfwewgifnvlnlvurxbjjww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031512.2161264-1641-97055870744551/AnsiballZ_stat.py'
Feb 02 11:25:12 compute-0 sudo[207060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:12 compute-0 python3.9[207062]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:12 compute-0 sudo[207060]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:12 compute-0 sudo[207186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnpdptnpbawekgrjuvcusubbgobpuukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031512.2161264-1641-97055870744551/AnsiballZ_copy.py'
Feb 02 11:25:12 compute-0 sudo[207186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:13 compute-0 ceph-mon[74676]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:25:13 compute-0 python3.9[207188]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031512.2161264-1641-97055870744551/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:13 compute-0 sudo[207186]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:13 compute-0 sudo[207338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iptltqkbyqjelsjkplxlsdfzizhsxkuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031513.2727218-1641-200070689834004/AnsiballZ_stat.py'
Feb 02 11:25:13 compute-0 sudo[207338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:13 compute-0 python3.9[207340]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:13 compute-0 sudo[207338]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:13 compute-0 sudo[207464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sboyqvvtdsucdaybmczvsklfvehpcaoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031513.2727218-1641-200070689834004/AnsiballZ_copy.py'
Feb 02 11:25:13 compute-0 sudo[207464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:14 compute-0 python3.9[207466]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031513.2727218-1641-200070689834004/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:14 compute-0 sudo[207464]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:14.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:14 compute-0 sudo[207616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tluwqvhkxuoqondkfkctlcsfqpalluun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031514.2999682-1641-97746127216534/AnsiballZ_stat.py'
Feb 02 11:25:14 compute-0 sudo[207616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:25:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:14 compute-0 python3.9[207618]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:14 compute-0 sudo[207616]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:15 compute-0 sudo[207742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auivspxepafhsyaeyzgxmwkxoxdvakew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031514.2999682-1641-97746127216534/AnsiballZ_copy.py'
Feb 02 11:25:15 compute-0 sudo[207742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:15 compute-0 ceph-mon[74676]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:15 compute-0 python3.9[207744]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031514.2999682-1641-97746127216534/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:15 compute-0 sudo[207742]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:15 compute-0 sudo[207895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzjptbeymrnuaylpcwpwrnvjldziqvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031515.369163-1641-222666445165312/AnsiballZ_stat.py'
Feb 02 11:25:15 compute-0 sudo[207895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:15 compute-0 python3.9[207897]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:15 compute-0 sudo[207895]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:16 compute-0 sudo[208020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfvqlfebrvbacoacahlepbwbgmpbywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031515.369163-1641-222666445165312/AnsiballZ_copy.py'
Feb 02 11:25:16 compute-0 sudo[208020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:16.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:16.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:16 compute-0 python3.9[208022]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031515.369163-1641-222666445165312/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:16 compute-0 sudo[208020]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:16 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 4.
Feb 02 11:25:16 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:25:16 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.504s CPU time.
Feb 02 11:25:16 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:25:16 compute-0 sudo[208180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snpeyjfyrbxgcgulncuvsdrfjkujaiqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031516.4869127-1641-272746419578049/AnsiballZ_stat.py'
Feb 02 11:25:16 compute-0 sudo[208180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:16 compute-0 podman[208221]: 2026-02-02 11:25:16.896732517 +0000 UTC m=+0.047173579 container create 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7c59c2e9a39a789efb873eaa53c6595448f35922aec8206d254cc1c2241308/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7c59c2e9a39a789efb873eaa53c6595448f35922aec8206d254cc1c2241308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7c59c2e9a39a789efb873eaa53c6595448f35922aec8206d254cc1c2241308/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7c59c2e9a39a789efb873eaa53c6595448f35922aec8206d254cc1c2241308/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:25:16 compute-0 python3.9[208187]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:16 compute-0 podman[208221]: 2026-02-02 11:25:16.950161936 +0000 UTC m=+0.100603018 container init 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:25:16 compute-0 podman[208221]: 2026-02-02 11:25:16.957010024 +0000 UTC m=+0.107451086 container start 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:25:16 compute-0 bash[208221]: 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b
Feb 02 11:25:16 compute-0 podman[208221]: 2026-02-02 11:25:16.876178155 +0000 UTC m=+0.026619237 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:25:16 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:25:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:16 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:25:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:16 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:25:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:17 compute-0 sudo[208180]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:17.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:25:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:25:17 compute-0 ceph-mon[74676]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:17 compute-0 sudo[208400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvgxspzxsslhyimeisjsyezrkaxijtlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031516.4869127-1641-272746419578049/AnsiballZ_copy.py'
Feb 02 11:25:17 compute-0 sudo[208400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:17 compute-0 python3.9[208402]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031516.4869127-1641-272746419578049/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:17 compute-0 sudo[208400]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:17 compute-0 sudo[208553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycktoynjdwkcwocglihrqkfgimfhmit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031517.653124-1641-6555743238017/AnsiballZ_stat.py'
Feb 02 11:25:17 compute-0 sudo[208553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:18 compute-0 python3.9[208555]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:18 compute-0 sudo[208553]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:18 compute-0 sudo[208676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojropjobffaudkvkckvhsgyqwweqtob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031517.653124-1641-6555743238017/AnsiballZ_copy.py'
Feb 02 11:25:18 compute-0 sudo[208676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:18 compute-0 python3.9[208678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031517.653124-1641-6555743238017/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:18 compute-0 sudo[208676]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:19 compute-0 ceph-mon[74676]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:19 compute-0 sudo[208830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyiwelcgvckwfpnylecapqernnblrgze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031518.9110124-1641-162405668112107/AnsiballZ_stat.py'
Feb 02 11:25:19 compute-0 sudo[208830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:19 compute-0 python3.9[208832]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:19 compute-0 sudo[208830]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:20.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:20 compute-0 sudo[208955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbtbszqprrtpfbdfyvduilxicvdegqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031518.9110124-1641-162405668112107/AnsiballZ_copy.py'
Feb 02 11:25:20 compute-0 sudo[208955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:20 compute-0 python3.9[208957]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770031518.9110124-1641-162405668112107/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:20 compute-0 sudo[208955]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:21 compute-0 ceph-mon[74676]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:25:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:21 compute-0 podman[209058]: 2026-02-02 11:25:21.293659126 +0000 UTC m=+0.080412377 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:25:21 compute-0 sudo[209134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxyorqbthtolranwjtfkrcsyzoktmex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031520.7848663-1980-165824225716398/AnsiballZ_command.py'
Feb 02 11:25:21 compute-0 sudo[209134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:21 compute-0 python3.9[209136]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 02 11:25:21 compute-0 sudo[209134]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:25:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:22 compute-0 sudo[209306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idakrbcceguwzddcwrrwfrrzfmbaobby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031522.011752-2007-90458780446628/AnsiballZ_file.py'
Feb 02 11:25:22 compute-0 sudo[209306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:22 compute-0 podman[209256]: 2026-02-02 11:25:22.287878924 +0000 UTC m=+0.076830554 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:25:22 compute-0 python3.9[209308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:22 compute-0 sudo[209306]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:25:22.658 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:25:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:25:22.658 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:25:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:25:22.659 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:25:22 compute-0 sudo[209459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzwnhnhlfydanxeetxqdjyscclghzdrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031522.6601074-2007-224589552003423/AnsiballZ_file.py'
Feb 02 11:25:22 compute-0 sudo[209459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:25:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:25:23 compute-0 sudo[209462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:25:23 compute-0 sudo[209462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:25:23 compute-0 sudo[209462]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:23 compute-0 python3.9[209461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:23 compute-0 sudo[209459]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:23 compute-0 ceph-mon[74676]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:25:23 compute-0 sudo[209637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxzeexaasbihbgieyxpfmbowipkvoab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031523.2842379-2007-119997992206391/AnsiballZ_file.py'
Feb 02 11:25:23 compute-0 sudo[209637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:23 compute-0 python3.9[209639]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:25:23 compute-0 sudo[209637]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:24 compute-0 sudo[209789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhogoxpyxgqysflsnwkrwoxfuvuwmkcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031523.940117-2007-230305500231836/AnsiballZ_file.py'
Feb 02 11:25:24 compute-0 sudo[209789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:24.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:24 compute-0 python3.9[209791]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:24 compute-0 sudo[209789]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:24 compute-0 ceph-mon[74676]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:25:24 compute-0 sudo[209942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxwgsfjzokivkuerbiigmcmdyvmokwqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031524.6230917-2007-202030237992004/AnsiballZ_file.py'
Feb 02 11:25:24 compute-0 sudo[209942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:25 compute-0 python3.9[209944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:25 compute-0 sudo[209942]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:25 compute-0 sudo[210094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xllmsbohvpgmritrsiliwlheilgeiije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031525.2415645-2007-217809891000287/AnsiballZ_file.py'
Feb 02 11:25:25 compute-0 sudo[210094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:25 compute-0 python3.9[210096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:25 compute-0 sudo[210094]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:26 compute-0 sudo[210247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmhkbdqpcsaeygztlcrcybcobtgkxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031525.8754337-2007-204405205493713/AnsiballZ_file.py'
Feb 02 11:25:26 compute-0 sudo[210247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:26 compute-0 python3.9[210249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:26 compute-0 sudo[210247]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:26 compute-0 ceph-mon[74676]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:25:27 compute-0 sudo[210400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsxubupgydmgejsctbisupnvzwceqojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031526.9454331-2007-138936911186444/AnsiballZ_file.py'
Feb 02 11:25:27 compute-0 sudo[210400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:27 compute-0 python3.9[210402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:27 compute-0 sudo[210400]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:27 compute-0 sudo[210553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eljxkmhjglkatbinbbugwzjhgdxxjqgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031527.5415244-2007-68318964665386/AnsiballZ_file.py'
Feb 02 11:25:27 compute-0 sudo[210553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:27 compute-0 python3.9[210555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:27 compute-0 sudo[210553]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:28.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:28 compute-0 sudo[210705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lirqlikgyqkqhjnlzwfgbxynsohvifmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031528.1335764-2007-194582474426894/AnsiballZ_file.py'
Feb 02 11:25:28 compute-0 sudo[210705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:28 compute-0 python3.9[210707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:28 compute-0 sudo[210705]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:29 compute-0 sudo[210858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxohxtbztablfskyanqfogftabflhwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031528.7905178-2007-2168454658626/AnsiballZ_file.py'
Feb 02 11:25:29 compute-0 sudo[210858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:29 compute-0 python3.9[210860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:29 compute-0 ceph-mon[74676]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:29 compute-0 sudo[210858]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:25:29
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', '.mgr', 'volumes', 'images']
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:25:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:25:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:29 compute-0 sudo[211023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckbnyhmmhejkgjppnipyccrmlwwvtagb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031529.402335-2007-263833585913778/AnsiballZ_file.py'
Feb 02 11:25:29 compute-0 sudo[211023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:25:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:29 compute-0 python3.9[211025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:29 compute-0 sudo[211023]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:30.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:30 compute-0 sudo[211179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtcbpzzejaktuaypnvaqngcbdkosuywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031530.0222003-2007-278765524278507/AnsiballZ_file.py'
Feb 02 11:25:30 compute-0 sudo[211179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:30 compute-0 python3.9[211181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:30 compute-0 sudo[211179]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:30 compute-0 sudo[211332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgtwbfhljaevycifmwopaazsabtefigx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031530.700554-2007-250684543123304/AnsiballZ_file.py'
Feb 02 11:25:30 compute-0 sudo[211332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:31 compute-0 python3.9[211334]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:31 compute-0 sudo[211332]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:31 compute-0 ceph-mon[74676]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:25:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:31 compute-0 sudo[211485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muxxqofdlzarkiyagcoifulpyyfhhchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031531.4920247-2304-219102429133526/AnsiballZ_stat.py'
Feb 02 11:25:31 compute-0 sudo[211485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:25:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:31 compute-0 python3.9[211487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:32 compute-0 sudo[211485]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:32.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:32 compute-0 sudo[211608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxiijbjrfjmasmdznexulssjrnydwpna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031531.4920247-2304-219102429133526/AnsiballZ_copy.py'
Feb 02 11:25:32 compute-0 sudo[211608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:32 compute-0 python3.9[211610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031531.4920247-2304-219102429133526/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:32 compute-0 sudo[211608]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:32 compute-0 ceph-mon[74676]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:25:32 compute-0 sudo[211761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gollgqipsmrojwljejadbjrafxnvmvfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031532.6404057-2304-216752324671348/AnsiballZ_stat.py'
Feb 02 11:25:32 compute-0 sudo[211761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112533 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:25:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:33 compute-0 python3.9[211763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:33 compute-0 sudo[211761]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:33 compute-0 sudo[211884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolcwipzyowshhdstnxeonpwzlsogtfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031532.6404057-2304-216752324671348/AnsiballZ_copy.py'
Feb 02 11:25:33 compute-0 sudo[211884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:33 compute-0 python3.9[211886]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031532.6404057-2304-216752324671348/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:33 compute-0 sudo[211884]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:34 compute-0 sudo[212037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drhfvdkrerryliutmdmytzwtrbxpsanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031533.8491914-2304-4468605688606/AnsiballZ_stat.py'
Feb 02 11:25:34 compute-0 sudo[212037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:34 compute-0 python3.9[212039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:34 compute-0 sudo[212037]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:34.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:34 compute-0 sudo[212160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjcbdmgcucfprbfmmdlgbgshhqypozce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031533.8491914-2304-4468605688606/AnsiballZ_copy.py'
Feb 02 11:25:34 compute-0 sudo[212160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:34 compute-0 python3.9[212162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031533.8491914-2304-4468605688606/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:34 compute-0 ceph-mon[74676]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:34 compute-0 sudo[212160]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:35 compute-0 sudo[212313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkpwsjjbbbnvtlynzbbxvquukucxhyny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031534.9592257-2304-101788249071256/AnsiballZ_stat.py'
Feb 02 11:25:35 compute-0 sudo[212313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:35 compute-0 python3.9[212315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:35 compute-0 sudo[212313]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:35 compute-0 sudo[212437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdfijykhovhemoloinpoaefrrjiezsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031534.9592257-2304-101788249071256/AnsiballZ_copy.py'
Feb 02 11:25:35 compute-0 sudo[212437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:35 compute-0 python3.9[212439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031534.9592257-2304-101788249071256/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:35 compute-0 sudo[212437]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:36.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:36 compute-0 sudo[212589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmpxxwdvllqvvcastdoxqfitpuosdme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031536.0587661-2304-43206313008361/AnsiballZ_stat.py'
Feb 02 11:25:36 compute-0 sudo[212589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:36 compute-0 python3.9[212591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:36 compute-0 ceph-mon[74676]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:25:36 compute-0 sudo[212589]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:36] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:25:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:36] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:25:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:25:37 compute-0 sudo[212713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwmqszhhptojxqagmplnpztqkycqfhdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031536.0587661-2304-43206313008361/AnsiballZ_copy.py'
Feb 02 11:25:37 compute-0 sudo[212713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:37 compute-0 python3.9[212715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031536.0587661-2304-43206313008361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:37 compute-0 sudo[212713]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:37 compute-0 sudo[212866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebjagoicqaicdixjcmueafusqisxuau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031537.4751256-2304-191744304342741/AnsiballZ_stat.py'
Feb 02 11:25:37 compute-0 sudo[212866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:37 compute-0 python3.9[212868]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:37 compute-0 sudo[212866]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:38 compute-0 sudo[212989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-disnejziyniisuihxdynepjtwdgkqbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031537.4751256-2304-191744304342741/AnsiballZ_copy.py'
Feb 02 11:25:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:25:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:38.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:25:38 compute-0 sudo[212989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:38.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:38 compute-0 python3.9[212991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031537.4751256-2304-191744304342741/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:38 compute-0 sudo[212989]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:38 compute-0 sudo[213141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrwqizcsnjzmowgpexfjaslrytkeojht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031538.5445578-2304-99061190868127/AnsiballZ_stat.py'
Feb 02 11:25:38 compute-0 sudo[213141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:38 compute-0 ceph-mon[74676]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:38 compute-0 python3.9[213143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:38 compute-0 sudo[213141]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:39 compute-0 sudo[213265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fidsizrkvecvflrpaezvwkjqthetdxzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031538.5445578-2304-99061190868127/AnsiballZ_copy.py'
Feb 02 11:25:39 compute-0 sudo[213265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:39 compute-0 python3.9[213267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031538.5445578-2304-99061190868127/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:39 compute-0 sudo[213265]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:39 compute-0 sudo[213418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrcpkxbtnktxqvuxyaoeiryjizcsqpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031539.5872967-2304-99064625305548/AnsiballZ_stat.py'
Feb 02 11:25:39 compute-0 sudo[213418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:40 compute-0 python3.9[213420]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:40 compute-0 sudo[213418]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:40.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:40 compute-0 sudo[213541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qustccxmqclafawbulyrqxrougdkploy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031539.5872967-2304-99064625305548/AnsiballZ_copy.py'
Feb 02 11:25:40 compute-0 sudo[213541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:40 compute-0 python3.9[213543]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031539.5872967-2304-99064625305548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:40 compute-0 sudo[213541]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:40 compute-0 ceph-mon[74676]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:40 compute-0 sudo[213694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obdydcfbiqdobjmrqxpjoimsselluxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031540.6948338-2304-35016634493706/AnsiballZ_stat.py'
Feb 02 11:25:40 compute-0 sudo[213694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:41 compute-0 python3.9[213696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:41 compute-0 sudo[213694]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:41 compute-0 sudo[213818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmkxnvjorvntuuyvjkwxtbqmttvawgbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031540.6948338-2304-35016634493706/AnsiballZ_copy.py'
Feb 02 11:25:41 compute-0 sudo[213818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:41 compute-0 python3.9[213820]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031540.6948338-2304-35016634493706/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:41 compute-0 sudo[213818]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:42 compute-0 sudo[213970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjbquzqqpwwtdkiravwyoergicfhiili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031541.8900168-2304-99132081747819/AnsiballZ_stat.py'
Feb 02 11:25:42 compute-0 sudo[213970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:42.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:42 compute-0 python3.9[213972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:42 compute-0 sudo[213970]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:42 compute-0 sudo[214093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvqlppzzxpwuuegafpixzvjsdjqbuxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031541.8900168-2304-99132081747819/AnsiballZ_copy.py'
Feb 02 11:25:42 compute-0 sudo[214093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:42 compute-0 python3.9[214095]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031541.8900168-2304-99132081747819/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:42 compute-0 sudo[214093]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:42 compute-0 ceph-mon[74676]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:25:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:43 compute-0 sudo[214269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cssjrvlmyjuvooaeekcigvndsrvbtxcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031542.938695-2304-50692933505545/AnsiballZ_stat.py'
Feb 02 11:25:43 compute-0 sudo[214224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:25:43 compute-0 sudo[214269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:43 compute-0 sudo[214224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:25:43 compute-0 sudo[214224]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:43 compute-0 python3.9[214272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:43 compute-0 sudo[214269]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:43 compute-0 sudo[214395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgbwsdxefxzobmuromxhkrltvcxwtvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031542.938695-2304-50692933505545/AnsiballZ_copy.py'
Feb 02 11:25:43 compute-0 sudo[214395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:43 compute-0 python3.9[214397]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031542.938695-2304-50692933505545/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:43 compute-0 sudo[214395]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:44 compute-0 sudo[214547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgibbubycgzdjhjwwfnjttixutzzuclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031544.019614-2304-32965300319255/AnsiballZ_stat.py'
Feb 02 11:25:44 compute-0 sudo[214547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:25:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:25:44 compute-0 python3.9[214549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:44 compute-0 sudo[214547]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:25:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:44 compute-0 sudo[214670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gehwblslcatyzywjwrmfkxqydlzyvdgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031544.019614-2304-32965300319255/AnsiballZ_copy.py'
Feb 02 11:25:44 compute-0 sudo[214670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:44 compute-0 ceph-mon[74676]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:44 compute-0 python3.9[214672]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031544.019614-2304-32965300319255/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:44 compute-0 sudo[214670]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:45 compute-0 sudo[214823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxqhcmfxmlsuxkufdvlgfczvabgcupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031545.0262153-2304-16798666990933/AnsiballZ_stat.py'
Feb 02 11:25:45 compute-0 sudo[214823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:45 compute-0 python3.9[214825]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:45 compute-0 sudo[214823]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:45 compute-0 sudo[214947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgisofcfkrzltgwwfzlvchehhimdnzuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031545.0262153-2304-16798666990933/AnsiballZ_copy.py'
Feb 02 11:25:45 compute-0 sudo[214947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:45 compute-0 python3.9[214949]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031545.0262153-2304-16798666990933/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:45 compute-0 sudo[214947]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:25:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:25:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:46 compute-0 sudo[215099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voxczgwufubtyvhhesrayrsutnmlropb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031546.3344114-2304-44963814109450/AnsiballZ_stat.py'
Feb 02 11:25:46 compute-0 sudo[215099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:46 compute-0 python3.9[215101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:25:46 compute-0 sudo[215099]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:46 compute-0 ceph-mon[74676]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:47 compute-0 sudo[215223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzjjmbbrsbzmmffoypqevahxikxyuwth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031546.3344114-2304-44963814109450/AnsiballZ_copy.py'
Feb 02 11:25:47 compute-0 sudo[215223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:47.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:47.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:47 compute-0 python3.9[215225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031546.3344114-2304-44963814109450/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:47 compute-0 sudo[215223]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:48 compute-0 python3.9[215376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:25:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:48.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:48 compute-0 ceph-mon[74676]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:48 compute-0 sudo[215530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvwxgvlyiyckeduxblhqbhpugbreepd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031548.5166023-2922-5445465254838/AnsiballZ_seboolean.py'
Feb 02 11:25:48 compute-0 sudo[215530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:49 compute-0 python3.9[215532]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
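The ansible.posix.seboolean task above persistently sets the os_enable_vtpm SELinux boolean to on. Outside Ansible, the equivalent change would be (a sketch, same boolean and same persistence behavior):

    setsebool -P os_enable_vtpm on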
Feb 02 11:25:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:50.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:50 compute-0 sudo[215530]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:50.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:50 compute-0 sudo[215687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxogmurqwcvyjnnvjnewwwubkyvuwrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031550.4871101-2946-5698938910098/AnsiballZ_copy.py'
Feb 02 11:25:50 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 02 11:25:50 compute-0 sudo[215687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:50 compute-0 python3.9[215689]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:50 compute-0 sudo[215687]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:50 compute-0 ceph-mon[74676]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:51 compute-0 sudo[215851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnnysxxatvfcfywfqqfabswjoiznutff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031551.318804-2946-250945586849544/AnsiballZ_copy.py'
Feb 02 11:25:51 compute-0 sudo[215851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:51 compute-0 podman[215814]: 2026-02-02 11:25:51.597548529 +0000 UTC m=+0.072880637 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 11:25:51 compute-0 python3.9[215860]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:25:51 compute-0 sudo[215851]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:52 compute-0 sudo[216019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdegrxkgljmvpwignbnblhqxgrguudrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031551.9065132-2946-252336328319795/AnsiballZ_copy.py'
Feb 02 11:25:52 compute-0 sudo[216019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:25:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:52.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:25:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:52.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:52 compute-0 python3.9[216021]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:52 compute-0 sudo[216019]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:52 compute-0 sudo[216184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eweamfrjowbvxetrrcoqbecussfblxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031552.5097034-2946-261989047133394/AnsiballZ_copy.py'
Feb 02 11:25:52 compute-0 sudo[216184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:52 compute-0 podman[216145]: 2026-02-02 11:25:52.767583959 +0000 UTC m=+0.048081893 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 11:25:52 compute-0 python3.9[216192]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:52 compute-0 ceph-mon[74676]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:25:52 compute-0 sudo[216184]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:53 compute-0 sudo[216343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxdayqxgfgnrlvcmxxqjizbdlhwbwigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031553.0992796-2946-145670577346299/AnsiballZ_copy.py'
Feb 02 11:25:53 compute-0 sudo[216343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:53 compute-0 python3.9[216345]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:53 compute-0 sudo[216343]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:53 compute-0 sudo[216496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phldpfymxmfvougonvygpcflgnsybogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031553.6735015-3054-148352842684363/AnsiballZ_copy.py'
Feb 02 11:25:53 compute-0 sudo[216496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:54 compute-0 python3.9[216498]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:54 compute-0 sudo[216496]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:54.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:54 compute-0 sudo[216648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffbfuzvequczkuupycvlvzazcwzttmax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031554.3018973-3054-157613535682577/AnsiballZ_copy.py'
Feb 02 11:25:54 compute-0 sudo[216648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:54 compute-0 python3.9[216650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:54 compute-0 sudo[216648]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:54 compute-0 ceph-mon[74676]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:55 compute-0 sudo[216801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbfvuenrugyrcrsnxwkmmjbnontvqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031554.9095504-3054-109686962206363/AnsiballZ_copy.py'
Feb 02 11:25:55 compute-0 sudo[216801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:55 compute-0 python3.9[216803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:55 compute-0 sudo[216801]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:55 compute-0 sudo[216954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnbqlfdxkkcrhuedshusgvbhkfwvunq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031555.4944093-3054-161970289357714/AnsiballZ_copy.py'
Feb 02 11:25:55 compute-0 sudo[216954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:55 compute-0 python3.9[216956]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:55 compute-0 sudo[216954]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:25:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:56.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:25:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:25:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:56.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:56 compute-0 sudo[217106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcgrvbvwfbauzbxzluxmyhylkzctqrrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031556.1229124-3054-87776351906040/AnsiballZ_copy.py'
Feb 02 11:25:56 compute-0 sudo[217106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:56 compute-0 python3.9[217108]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:25:56 compute-0 sudo[217106]: pam_unix(sudo:session): session closed for user root
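Taken together, the copy tasks in this stretch fan a single issued certificate, key, and CA out to the canonical libvirt and QEMU TLS locations: server and client cert/key pairs under /etc/pki/libvirt/ and /etc/pki/qemu/, plus the CA certificate at /etc/pki/CA/cacert.pem and /etc/pki/qemu/ca-cert.pem. Each task behaves roughly like install(1) with the logged owner, group, and mode; two representative examples (a sketch, ignoring the SELinux relabeling and idempotence handling the copy module provides):

    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/servercert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/server-key.pem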
Feb 02 11:25:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:25:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:25:57 compute-0 ceph-mon[74676]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:25:57.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:25:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:57 compute-0 sudo[217259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnircgpyhtpduauxutkimbknsvitpeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031556.8305469-3162-149003123540974/AnsiballZ_systemd.py'
Feb 02 11:25:57 compute-0 sudo[217259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:57 compute-0 python3.9[217261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:25:57 compute-0 systemd[1]: Reloading.
Feb 02 11:25:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:57 compute-0 systemd-rc-local-generator[217287]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:25:57 compute-0 systemd-sysv-generator[217292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:25:57 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Feb 02 11:25:57 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Feb 02 11:25:57 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 02 11:25:57 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 02 11:25:57 compute-0 systemd[1]: Starting libvirt logging daemon...
Feb 02 11:25:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:57 compute-0 systemd[1]: Started libvirt logging daemon.
Feb 02 11:25:57 compute-0 sudo[217259]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:58 compute-0 sudo[217454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrorpzupqcfptpbjkfjthrepgkzfeof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031557.9455976-3162-26773776580178/AnsiballZ_systemd.py'
Feb 02 11:25:58 compute-0 sudo[217454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:25:58.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:25:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:25:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:25:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:25:58 compute-0 python3.9[217456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:25:58 compute-0 systemd[1]: Reloading.
Feb 02 11:25:58 compute-0 systemd-rc-local-generator[217481]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:25:58 compute-0 systemd-sysv-generator[217485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:25:58 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 02 11:25:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 02 11:25:58 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 02 11:25:58 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 02 11:25:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 02 11:25:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 02 11:25:58 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 02 11:25:58 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 02 11:25:58 compute-0 sudo[217454]: pam_unix(sudo:session): session closed for user root
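Each of these ansible.builtin.systemd tasks performs a daemon-reload (picking up drop-ins like the override.conf written earlier) and then restarts one modular libvirt daemon, which is why the corresponding socket units are started before the service itself. From a shell, the same step would be (a sketch, using virtnodedevd as the example):

    systemctl daemon-reload
    systemctl restart virtnodedevd.service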
Feb 02 11:25:59 compute-0 ceph-mon[74676]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:59 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 02 11:25:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:59 compute-0 sudo[217555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:25:59 compute-0 sudo[217555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:25:59 compute-0 sudo[217555]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:59 compute-0 sudo[217616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:25:59 compute-0 sudo[217616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:25:59 compute-0 sudo[217722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigvptisteisndftpawwcdrtwnnihasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031559.0678194-3162-199878626826783/AnsiballZ_systemd.py'
Feb 02 11:25:59 compute-0 sudo[217722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:25:59 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 02 11:25:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:25:59 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 02 11:25:59 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 02 11:25:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:25:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:25:59 compute-0 sudo[217616]: pam_unix(sudo:session): session closed for user root
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:25:59 compute-0 python3.9[217724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:25:59 compute-0 systemd[1]: Reloading.
Feb 02 11:25:59 compute-0 systemd-sysv-generator[217790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:25:59 compute-0 systemd-rc-local-generator[217785]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:25:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:25:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:25:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:00 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 02 11:26:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:00 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 02 11:26:00 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 02 11:26:00 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 02 11:26:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 02 11:26:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 02 11:26:00 compute-0 sudo[217722]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:00.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:00 compute-0 setroubleshoot[217546]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f1a1cfa6-de7b-487d-a8b1-c748200f8d4b
Feb 02 11:26:00 compute-0 setroubleshoot[217546]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
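The denial is virtlogd being blocked from dac_read_search, i.e. from bypassing directory search permission checks as root. Collected in one place, the plugin's suggested triage amounts to the following (a sketch that follows the plugin's own commands; my-virtlogd is the plugin's example module name, and the local module should only be installed if the audit PATH records show the access is legitimate):

    # Turn on full auditing so the AVC carries PATH information, then recreate it:
    auditctl -w /etc/shadow -p w
    ausearch -m avc -ts recent
    # If the PATH record shows a file with wrong ownership or permissions, fix the file;
    # otherwise build and install a local policy module:
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp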
Feb 02 11:26:00 compute-0 sudo[217974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pskqnvatirmtrttkwfkmqfxjdvhgzhug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031560.2697628-3162-238675089979637/AnsiballZ_systemd.py'
Feb 02 11:26:00 compute-0 sudo[217974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:00 compute-0 python3.9[217976]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:26:00 compute-0 systemd[1]: Reloading.
Feb 02 11:26:00 compute-0 systemd-rc-local-generator[218000]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:26:00 compute-0 systemd-sysv-generator[218006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:26:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:01 compute-0 ceph-mon[74676]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:01 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Feb 02 11:26:01 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 02 11:26:01 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 02 11:26:01 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 02 11:26:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 02 11:26:01 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 02 11:26:01 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 02 11:26:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 02 11:26:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 02 11:26:01 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 02 11:26:01 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 02 11:26:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:01 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 02 11:26:01 compute-0 sudo[217974]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:26:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:26:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:01 compute-0 sudo[218192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yukpllqpdgrgymielfeaztkwewfyvdan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031561.6596096-3162-47149902123614/AnsiballZ_systemd.py'
Feb 02 11:26:01 compute-0 sudo[218192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:26:02 compute-0 python3.9[218194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:26:02 compute-0 systemd[1]: Reloading.
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:02.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:26:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:26:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:02.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:02 compute-0 systemd-rc-local-generator[218246]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:26:02 compute-0 systemd-sysv-generator[218249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:26:02 compute-0 sudo[218196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:26:02 compute-0 sudo[218196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:02 compute-0 sudo[218196]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 ceph-mon[74676]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:26:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:26:02 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Feb 02 11:26:02 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Feb 02 11:26:02 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 02 11:26:02 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 02 11:26:02 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 02 11:26:02 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 02 11:26:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 11:26:02 compute-0 sudo[218257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:26:02 compute-0 sudo[218257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:02 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 11:26:02 compute-0 sudo[218192]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.046324565 +0000 UTC m=+0.046318931 container create 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:26:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:03 compute-0 systemd[1]: Started libpod-conmon-9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f.scope.
Feb 02 11:26:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.021093508 +0000 UTC m=+0.021087894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.131729662 +0000 UTC m=+0.131724058 container init 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.139360928 +0000 UTC m=+0.139355294 container start 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:26:03 compute-0 unruffled_lalande[218438]: 167 167
Feb 02 11:26:03 compute-0 systemd[1]: libpod-9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f.scope: Deactivated successfully.
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.146150339 +0000 UTC m=+0.146144905 container attach 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.148039105 +0000 UTC m=+0.148033471 container died 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3394f94b0bce29a82df400cb3a005b10737fbaff3516f7c936844efa08600592-merged.mount: Deactivated successfully.
Feb 02 11:26:03 compute-0 podman[218369]: 2026-02-02 11:26:03.208201555 +0000 UTC m=+0.208195911 container remove 9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:26:03 compute-0 systemd[1]: libpod-conmon-9a341ae3baf308c24e4bdc8f7d070369cc49fd57a6589603c8d5caa593fd922f.scope: Deactivated successfully.
Feb 02 11:26:03 compute-0 sudo[218495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:26:03 compute-0 sudo[218495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:03 compute-0 sudo[218495]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:03 compute-0 sudo[218554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvaqbxhxdwqwcfhshlwzvoiwjcvmajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031563.024527-3273-87110727717490/AnsiballZ_file.py'
Feb 02 11:26:03 compute-0 sudo[218554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.357778701 +0000 UTC m=+0.046817987 container create 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Feb 02 11:26:03 compute-0 systemd[1]: Started libpod-conmon-917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932.scope.
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.337056898 +0000 UTC m=+0.026096204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.468762355 +0000 UTC m=+0.157801671 container init 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.47976572 +0000 UTC m=+0.168805006 container start 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:26:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.483860901 +0000 UTC m=+0.172900217 container attach 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:26:03 compute-0 python3.9[218561]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:03 compute-0 sudo[218554]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:03 compute-0 stupefied_mclean[218578]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:26:03 compute-0 stupefied_mclean[218578]: --> All data devices are unavailable
Feb 02 11:26:03 compute-0 systemd[1]: libpod-917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932.scope: Deactivated successfully.
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.852922152 +0000 UTC m=+0.541961458 container died 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c53f38cb102170fba2921692f088374e969e7a63e09133ce5c24bc5dc222a9d-merged.mount: Deactivated successfully.
Feb 02 11:26:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:03 compute-0 podman[218562]: 2026-02-02 11:26:03.919859102 +0000 UTC m=+0.608898388 container remove 917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:26:03 compute-0 systemd[1]: libpod-conmon-917c0d2ec57e362c31a872fa52981491e08c471e6ee385fb6ebe9afca9803932.scope: Deactivated successfully.
Feb 02 11:26:03 compute-0 sudo[218257]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:03 compute-0 sudo[218756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acspctqbwbyqqwivhlaqbkcwfsmqjeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031563.713233-3297-276971052614890/AnsiballZ_find.py'
Feb 02 11:26:03 compute-0 sudo[218756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:04 compute-0 sudo[218759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:26:04 compute-0 sudo[218759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:04 compute-0 sudo[218759]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:04 compute-0 sudo[218784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:26:04 compute-0 sudo[218784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:04 compute-0 python3.9[218758]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:26:04 compute-0 sudo[218756]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:04.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.450998798 +0000 UTC m=+0.039242972 container create 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:26:04 compute-0 systemd[1]: Started libpod-conmon-4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93.scope.
Feb 02 11:26:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.43349174 +0000 UTC m=+0.021735934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.541040192 +0000 UTC m=+0.129284406 container init 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.547654768 +0000 UTC m=+0.135898952 container start 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.551134261 +0000 UTC m=+0.139378435 container attach 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:26:04 compute-0 suspicious_bouman[218959]: 167 167
Feb 02 11:26:04 compute-0 systemd[1]: libpod-4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93.scope: Deactivated successfully.
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.55412223 +0000 UTC m=+0.142366404 container died 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de5aaebebbbb6da9ebd4dc38a7f9e376a30539735be9f1f8d39e602d8bc96ca-merged.mount: Deactivated successfully.
Feb 02 11:26:04 compute-0 podman[218897]: 2026-02-02 11:26:04.589413794 +0000 UTC m=+0.177657968 container remove 4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb 02 11:26:04 compute-0 systemd[1]: libpod-conmon-4120e386d2888ff7f29ac03b465e265753ef2172c29312ccd4f3e393f65e6d93.scope: Deactivated successfully.
Feb 02 11:26:04 compute-0 sudo[219032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wppuxswylvzewwvaehituvsftkunurwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031564.3899324-3321-9712309249674/AnsiballZ_command.py'
Feb 02 11:26:04 compute-0 sudo[219032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:04 compute-0 podman[219040]: 2026-02-02 11:26:04.739166655 +0000 UTC m=+0.043128287 container create ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:26:04 compute-0 systemd[1]: Started libpod-conmon-ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08.scope.
Feb 02 11:26:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed461a1f2b85569a2cc9b5fa1a7c54c83df43b9cb162430a522aefbd3bf1e67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed461a1f2b85569a2cc9b5fa1a7c54c83df43b9cb162430a522aefbd3bf1e67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed461a1f2b85569a2cc9b5fa1a7c54c83df43b9cb162430a522aefbd3bf1e67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed461a1f2b85569a2cc9b5fa1a7c54c83df43b9cb162430a522aefbd3bf1e67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:04 compute-0 podman[219040]: 2026-02-02 11:26:04.721827722 +0000 UTC m=+0.025789384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:04 compute-0 podman[219040]: 2026-02-02 11:26:04.832221058 +0000 UTC m=+0.136182721 container init ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:26:04 compute-0 podman[219040]: 2026-02-02 11:26:04.840228255 +0000 UTC m=+0.144189887 container start ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:26:04 compute-0 podman[219040]: 2026-02-02 11:26:04.850599162 +0000 UTC m=+0.154560814 container attach ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:26:04 compute-0 ceph-mon[74676]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:04 compute-0 python3.9[219037]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:04 compute-0 sudo[219032]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]: {
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:     "1": [
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:         {
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "devices": [
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "/dev/loop3"
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             ],
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "lv_name": "ceph_lv0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "lv_size": "21470642176",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "name": "ceph_lv0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "tags": {
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.cluster_name": "ceph",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.crush_device_class": "",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.encrypted": "0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.osd_id": "1",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.type": "block",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.vdo": "0",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:                 "ceph.with_tpm": "0"
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             },
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "type": "block",
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:             "vg_name": "ceph_vg0"
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:         }
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]:     ]
Feb 02 11:26:05 compute-0 friendly_driscoll[219056]: }
Feb 02 11:26:05 compute-0 systemd[1]: libpod-ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08.scope: Deactivated successfully.
Feb 02 11:26:05 compute-0 podman[219040]: 2026-02-02 11:26:05.131832044 +0000 UTC m=+0.435793676 container died ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed461a1f2b85569a2cc9b5fa1a7c54c83df43b9cb162430a522aefbd3bf1e67-merged.mount: Deactivated successfully.
Feb 02 11:26:05 compute-0 podman[219040]: 2026-02-02 11:26:05.178550117 +0000 UTC m=+0.482511749 container remove ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_driscoll, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:26:05 compute-0 systemd[1]: libpod-conmon-ee99aa680328f9fdbdca7bf40b3d1ec162474e9776b1dbcea4fc22b880317f08.scope: Deactivated successfully.
Feb 02 11:26:05 compute-0 sudo[218784]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:05 compute-0 sudo[219143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:26:05 compute-0 sudo[219143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:05 compute-0 sudo[219143]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:05 compute-0 sudo[219188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:26:05 compute-0 sudo[219188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:05 compute-0 python3.9[219282]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.716713899 +0000 UTC m=+0.036017347 container create 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:26:05 compute-0 systemd[1]: Started libpod-conmon-9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79.scope.
Feb 02 11:26:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.794605464 +0000 UTC m=+0.113908922 container init 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.701732926 +0000 UTC m=+0.021036404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.802819097 +0000 UTC m=+0.122122545 container start 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.807280579 +0000 UTC m=+0.126584027 container attach 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:26:05 compute-0 jovial_euclid[219364]: 167 167
Feb 02 11:26:05 compute-0 systemd[1]: libpod-9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79.scope: Deactivated successfully.
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.808596648 +0000 UTC m=+0.127900116 container died 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-01563a28f8bbbf1f81304fe46a8b9bf36cc1992780211e56fb469a0cc7c24c31-merged.mount: Deactivated successfully.
Feb 02 11:26:05 compute-0 podman[219325]: 2026-02-02 11:26:05.843700217 +0000 UTC m=+0.163003665 container remove 9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_euclid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:26:05 compute-0 systemd[1]: libpod-conmon-9adc95686cb1c634c7323f123c6a29f4034de7c687e084daddf0e492f8211e79.scope: Deactivated successfully.
Feb 02 11:26:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:05 compute-0 podman[219389]: 2026-02-02 11:26:05.981160624 +0000 UTC m=+0.039244572 container create 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:26:06 compute-0 systemd[1]: Started libpod-conmon-71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3.scope.
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:05.964693257 +0000 UTC m=+0.022777225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:26:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d3da75b4914aa148c1ba69887546c0759959066d113756ce6c6093f82cf9ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d3da75b4914aa148c1ba69887546c0759959066d113756ce6c6093f82cf9ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d3da75b4914aa148c1ba69887546c0759959066d113756ce6c6093f82cf9ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d3da75b4914aa148c1ba69887546c0759959066d113756ce6c6093f82cf9ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:06.087080218 +0000 UTC m=+0.145164166 container init 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:06.096188018 +0000 UTC m=+0.154271966 container start 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:06.101412842 +0000 UTC m=+0.159496810 container attach 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:26:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:06.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:06 compute-0 lvm[219606]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:26:06 compute-0 lvm[219606]: VG ceph_vg0 finished
Feb 02 11:26:06 compute-0 python3.9[219583]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:06 compute-0 musing_galileo[219406]: {}
Feb 02 11:26:06 compute-0 systemd[1]: libpod-71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3.scope: Deactivated successfully.
Feb 02 11:26:06 compute-0 systemd[1]: libpod-71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3.scope: Consumed 1.096s CPU time.
Feb 02 11:26:06 compute-0 conmon[219406]: conmon 71798fbf6923262b85e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3.scope/container/memory.events
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:06.833213626 +0000 UTC m=+0.891297574 container died 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:26:06 compute-0 ceph-mon[74676]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1d3da75b4914aa148c1ba69887546c0759959066d113756ce6c6093f82cf9ed-merged.mount: Deactivated successfully.
Feb 02 11:26:06 compute-0 podman[219389]: 2026-02-02 11:26:06.880835935 +0000 UTC m=+0.938919883 container remove 71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:26:06 compute-0 systemd[1]: libpod-conmon-71798fbf6923262b85e2916ea08e656c59d2eb2230db0868e724451406ca60f3.scope: Deactivated successfully.
Feb 02 11:26:06 compute-0 sudo[219188]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:26:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:26:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Feb 02 11:26:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
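The two lines above are the same Prometheus scrape of the ceph-mgr prometheus module, logged once by the container unit and once by ceph-mgr itself. A manual check of the exporter from compute-0 would look like the sketch below, assuming the module's default port 9283, which these log lines do not record:

    # Hypothetical manual scrape; 9283 is the prometheus module's
    # default port and is an assumption here.
    curl -s http://localhost:9283/metrics | head -n 20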
Feb 02 11:26:07 compute-0 sudo[219704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:26:07 compute-0 sudo[219704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:07 compute-0 sudo[219704]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:07.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:26:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
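This TIRPC event recurs every couple of seconds across ganesha worker threads: a client connects, the RPC header read fails immediately, and the connection is marked dead. That pattern is consistent with a TCP liveness probe that connects and closes without speaking RPC; the literal '%' is ganesha's own mangled format specifier, reproduced verbatim above. A sketch of the kind of probe that could produce it, assuming the standard NFS port 2049 (neither the prober nor the port is recorded here):

    # Hypothetical liveness probe; connects and closes without
    # sending a valid RPC header.
    nc -z -w 1 compute-0 2049 && echo nfs-port-open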
Feb 02 11:26:07 compute-0 python3.9[219767]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031566.0911088-3378-192278504146777/.source.xml follow=False _original_basename=secret.xml.j2 checksum=768f4478e7238e8af1b1a105a7bc90a7f197a516 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:07 compute-0 sudo[219918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexnkyauzdpbymcxifleeixsbtrqmrbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031567.432751-3423-13970833156200/AnsiballZ_command.py'
Feb 02 11:26:07 compute-0 sudo[219918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:07 compute-0 python3.9[219920]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 1d33f80b-d6ca-501c-bac7-184379b89279
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:07 compute-0 polkitd[43551]: Registered Authentication Agent for unix-process:219922:322218 (system bus name :1.2839 [pkttyagent --process 219922 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 11:26:07 compute-0 polkitd[43551]: Unregistered Authentication Agent for unix-process:219922:322218 (system bus name :1.2839, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 11:26:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:07 compute-0 polkitd[43551]: Registered Authentication Agent for unix-process:219921:322218 (system bus name :1.2840 [pkttyagent --process 219921 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 11:26:07 compute-0 polkitd[43551]: Unregistered Authentication Agent for unix-process:219921:322218 (system bus name :1.2840, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 11:26:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:26:07 compute-0 sudo[219918]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:08.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
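The anonymous HEAD / requests arriving every two seconds from 192.168.122.100 and 192.168.122.102, returning 200 with near-zero latency, look like external health checks against the radosgw beast frontend. A manual equivalent, assuming beast's default port 8080 (the port is not shown in these lines):

    # Hypothetical probe; substitute the real rgw_frontends port.
    curl -sI http://compute-0:8080/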
Feb 02 11:26:08 compute-0 python3.9[220082]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:08 compute-0 sudo[220233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fildxwwdcxscmuvxlpheohyfijuncplt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031568.745425-3471-90840839600117/AnsiballZ_command.py'
Feb 02 11:26:08 compute-0 sudo[220233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:08 compute-0 ceph-mon[74676]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:09 compute-0 sudo[220233]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:09 compute-0 sudo[220387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjlgllweunmnirmwpfqsylyefgbnzur ; FSID=1d33f80b-d6ca-501c-bac7-184379b89279 KEY=AQDlhYBpAAAAABAAVlWxpfi06TnsRXPWuiAnKA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031569.3961725-3495-274518891826997/AnsiballZ_command.py'
Feb 02 11:26:09 compute-0 sudo[220387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:09 compute-0 polkitd[43551]: Registered Authentication Agent for unix-process:220390:322411 (system bus name :1.2843 [pkttyagent --process 220390 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 11:26:09 compute-0 polkitd[43551]: Unregistered Authentication Agent for unix-process:220390:322411 (system bus name :1.2843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 11:26:09 compute-0 sudo[220387]: pam_unix(sudo:session): session closed for user root
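The sequence from 11:26:07 to 11:26:09 is a standard libvirt Ceph secret rotation: undefine the old secret, define a new one from a temporary XML file, remove the file, then inject the key with FSID and KEY passed through the environment. A minimal sketch of the whole flow; the XML body and the secret-set-value step are inferred from the pattern and are not shown in the log:

    # Sketch of the rotation seen above (FSID as logged; the XML is
    # hypothetical, a typical ceph-usage secret definition).
    FSID=1d33f80b-d6ca-501c-bac7-184379b89279
    virsh secret-undefine "$FSID" || true
    cat > /tmp/secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>$FSID</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file /tmp/secret.xml
    rm -f /tmp/secret.xml
    # KEY is supplied via the environment, as in the sudo line above.
    virsh secret-set-value "$FSID" --base64 "$KEY"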
Feb 02 11:26:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:10 compute-0 sudo[220545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrnripkskjaqvhtnnxpiifwsnejluxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031570.0360954-3519-18823852132266/AnsiballZ_copy.py'
Feb 02 11:26:10 compute-0 sudo[220545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:10 compute-0 python3.9[220547]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:10 compute-0 sudo[220545]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 02 11:26:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.027s CPU time.
Feb 02 11:26:10 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 02 11:26:10 compute-0 sudo[220698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwixftgwevlgblkerjpwooozzpabdbou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031570.6398857-3543-225437589127762/AnsiballZ_stat.py'
Feb 02 11:26:10 compute-0 sudo[220698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:11 compute-0 ceph-mon[74676]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:11 compute-0 python3.9[220700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:11 compute-0 sudo[220698]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:11 compute-0 sudo[220822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpubjdkjeddcwlzxjqsgoxiyfdqhzktf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031570.6398857-3543-225437589127762/AnsiballZ_copy.py'
Feb 02 11:26:11 compute-0 sudo[220822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:11 compute-0 python3.9[220824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031570.6398857-3543-225437589127762/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:11 compute-0 sudo[220822]: pam_unix(sudo:session): session closed for user root
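/var/lib/edpm-config/firewall/ accumulates per-service YAML fragments (libvirt.yaml here, the nftables base and user rules below) that the edpm_nftables role later compiles into the /etc/nftables/edpm-*.nft files written further down in this run. A minimal sketch of such a fragment in the rule format this role family consumes; the rule name and port are illustrative, since the real file contents are not logged:

    # Hypothetical fragment; actual libvirt.yaml contents not logged.
    cat > /var/lib/edpm-config/firewall/example.yaml <<'EOF'
    - rule_name: '110 allow example service'
      rule:
        proto: tcp
        dport: 16514
    EOF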
Feb 02 11:26:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:12.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:12 compute-0 sudo[220974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazrygckwdpuapdkptecbzweblvsozcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031572.104339-3591-171302495017259/AnsiballZ_file.py'
Feb 02 11:26:12 compute-0 sudo[220974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:12 compute-0 python3.9[220976]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:12 compute-0 sudo[220974]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:13 compute-0 sudo[221127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaokqaarxpbwhdcklobccugnubvhneak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031572.7542303-3615-53691183987455/AnsiballZ_stat.py'
Feb 02 11:26:13 compute-0 sudo[221127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:13 compute-0 ceph-mon[74676]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:13 compute-0 python3.9[221129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:13 compute-0 sudo[221127]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:13 compute-0 sudo[221205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgzrjmyqzlabqsaqkscmmdulpdvivdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031572.7542303-3615-53691183987455/AnsiballZ_file.py'
Feb 02 11:26:13 compute-0 sudo[221205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:13 compute-0 python3.9[221207]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:13 compute-0 sudo[221205]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:14 compute-0 sudo[221358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sohiwnlufdiitrrrmndrzoytpdgkwkyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031573.808138-3651-243307732865098/AnsiballZ_stat.py'
Feb 02 11:26:14 compute-0 sudo[221358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:14 compute-0 python3.9[221360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:14 compute-0 sudo[221358]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:14 compute-0 sudo[221436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxuvwmpuszmpbilusyrhvicsbmkldeyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031573.808138-3651-243307732865098/AnsiballZ_file.py'
Feb 02 11:26:14 compute-0 sudo[221436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:26:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:14 compute-0 python3.9[221438]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.488n1dcb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:14 compute-0 sudo[221436]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:15 compute-0 ceph-mon[74676]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:15 compute-0 sudo[221589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkfwckfxvauieqqrzwenkjognusvdfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031574.8265955-3687-203477534850832/AnsiballZ_stat.py'
Feb 02 11:26:15 compute-0 sudo[221589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:15 compute-0 python3.9[221591]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:15 compute-0 sudo[221589]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:15 compute-0 sudo[221667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obokdevmdeirkjyofohrynreexvdbpfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031574.8265955-3687-203477534850832/AnsiballZ_file.py'
Feb 02 11:26:15 compute-0 sudo[221667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:15 compute-0 python3.9[221669]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:15 compute-0 sudo[221667]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:16 compute-0 sudo[221820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwscjxtcyebrrmstmklvblgdxosgaswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031576.1984737-3726-41000000642661/AnsiballZ_command.py'
Feb 02 11:26:16 compute-0 sudo[221820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:16 compute-0 python3.9[221822]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:16 compute-0 sudo[221820]: pam_unix(sudo:session): session closed for user root
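Before rewriting anything, the play snapshots the live ruleset as JSON via nft's -j flag. The same snapshot can be taken and pretty-printed by hand:

    # Dump the current ruleset as JSON and pretty-print it.
    nft -j list ruleset | python3 -m json.tool | head -n 40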
Feb 02 11:26:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:16] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:26:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:16] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:26:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:17.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:26:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428009630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:17 compute-0 ceph-mon[74676]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:17 compute-0 sudo[221974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zztzuegqahfraggbqzqdryjevfrqdkvn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031576.8490653-3750-154625168513109/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 11:26:17 compute-0 sudo[221974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:17 compute-0 python3[221976]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 11:26:17 compute-0 sudo[221974]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:17 compute-0 sudo[222127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfrtozbgtivkwgufhtmwxcczfvxanhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031577.6102147-3774-266012291778287/AnsiballZ_stat.py'
Feb 02 11:26:17 compute-0 sudo[222127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:18 compute-0 python3.9[222129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:18 compute-0 sudo[222127]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:18.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:18.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:18 compute-0 sudo[222205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvtmcpbaxdwzfuohwrslsnmgbnxjdse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031577.6102147-3774-266012291778287/AnsiballZ_file.py'
Feb 02 11:26:18 compute-0 sudo[222205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:18 compute-0 python3.9[222207]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:18 compute-0 sudo[222205]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:19 compute-0 sudo[222358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcclewdwagoqqibzbzqdkccmfpttabzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031578.7327812-3810-210383122516627/AnsiballZ_stat.py'
Feb 02 11:26:19 compute-0 sudo[222358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:19 compute-0 ceph-mon[74676]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:19 compute-0 python3.9[222360]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:19 compute-0 sudo[222358]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428009630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:19 compute-0 sudo[222484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgdwnpolwjerinbxbicdylkgjiqctmlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031578.7327812-3810-210383122516627/AnsiballZ_copy.py'
Feb 02 11:26:19 compute-0 sudo[222484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:19 compute-0 python3.9[222486]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031578.7327812-3810-210383122516627/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:19 compute-0 sudo[222484]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:20 compute-0 sudo[222636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svaelcjvrasbhdxynmrfntjxcfucpphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031579.8939624-3855-154605472297293/AnsiballZ_stat.py'
Feb 02 11:26:20 compute-0 sudo[222636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:20 compute-0 python3.9[222638]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:20 compute-0 sudo[222636]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:20 compute-0 sudo[222714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrutjnmzybnwwxuwfihzweshvkrjvezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031579.8939624-3855-154605472297293/AnsiballZ_file.py'
Feb 02 11:26:20 compute-0 sudo[222714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:20 compute-0 python3.9[222716]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:20 compute-0 sudo[222714]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:21 compute-0 ceph-mon[74676]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:21 compute-0 sudo[222867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtqxjbudwexotdqohnrzkwohtactppa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031581.2613413-3891-281435949001892/AnsiballZ_stat.py'
Feb 02 11:26:21 compute-0 sudo[222867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:21 compute-0 python3.9[222870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:21 compute-0 sudo[222867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428009630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:21 compute-0 sudo[222959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwszqcyubgmsvjoduucynmmxsphhymr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031581.2613413-3891-281435949001892/AnsiballZ_file.py'
Feb 02 11:26:21 compute-0 sudo[222959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:22 compute-0 podman[222920]: 2026-02-02 11:26:22.06712276 +0000 UTC m=+0.116067275 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:26:22 compute-0 python3.9[222967]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:22 compute-0 sudo[222959]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:26:22.659 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:26:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:26:22.659 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:26:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:26:22.659 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:26:22 compute-0 sudo[223125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbenpdyhqdokwjwmyqndzmtvfuahezeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031582.3798683-3927-228903965024032/AnsiballZ_stat.py'
Feb 02 11:26:22 compute-0 sudo[223125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:22 compute-0 python3.9[223127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:22 compute-0 sudo[223125]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:23 compute-0 ceph-mon[74676]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:23 compute-0 sudo[223266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwhsuzatttodgrxxflnrlflurgmqybs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031582.3798683-3927-228903965024032/AnsiballZ_copy.py'
Feb 02 11:26:23 compute-0 sudo[223266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:23 compute-0 podman[223225]: 2026-02-02 11:26:23.288511519 +0000 UTC m=+0.078016919 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 02 11:26:23 compute-0 sudo[223273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:26:23 compute-0 sudo[223273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:23 compute-0 sudo[223273]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:23 compute-0 python3.9[223272]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031582.3798683-3927-228903965024032/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:23 compute-0 sudo[223266]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:23 compute-0 sudo[223448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlxhmsitxbdofhlnvvxuznngdjbfpmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031583.625095-3972-138577528004534/AnsiballZ_file.py'
Feb 02 11:26:23 compute-0 sudo[223448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:24 compute-0 python3.9[223450]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:24 compute-0 sudo[223448]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:24 compute-0 sudo[223600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijqyhitjyuxwjzfxbpolsnkhgaqjvdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031584.2267177-3996-262762117854393/AnsiballZ_command.py'
Feb 02 11:26:24 compute-0 sudo[223600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:24 compute-0 python3.9[223602]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:24 compute-0 sudo[223600]: pam_unix(sudo:session): session closed for user root
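The five fragments are concatenated in dependency order (create chains, flush stale rules, load new rules, then the jump updates and jumps that hook them into the base chains) and passed through nft -c, which parses and validates without committing anything. The same dry run by hand:

    # Check-only load of the assembled ruleset; -c validates
    # but does not apply.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -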
Feb 02 11:26:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:25 compute-0 ceph-mon[74676]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:25 compute-0 sudo[223756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dllaadoxuwlvtemzeiizhpopgdvqrkqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031584.8349319-4020-85613921184515/AnsiballZ_blockinfile.py'
Feb 02 11:26:25 compute-0 sudo[223756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:25 compute-0 python3.9[223758]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:25 compute-0 sudo[223756]: pam_unix(sudo:session): session closed for user root
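From the parameters logged above (the block content, marker '# {mark} ANSIBLE MANAGED BLOCK' with marker_begin=BEGIN and marker_end=END, and validate='nft -c -f %s'), the managed section of /etc/sysconfig/nftables.conf should come out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The flush and update-jump fragments are left out of the boot-time config, presumably because a fresh boot has no live ruleset to flush or re-hook.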
Feb 02 11:26:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:25 compute-0 sudo[223909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjthfnpelxykjruijwidsjbiizmjefp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031585.7544692-4047-20504459931266/AnsiballZ_command.py'
Feb 02 11:26:25 compute-0 sudo[223909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:26 compute-0 python3.9[223911]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:26 compute-0 sudo[223909]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:26] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:26:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:26] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Feb 02 11:26:27 compute-0 sudo[224063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmnljzpxhzlyqesiagjxoyylfwnewutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031586.7608047-4071-56327108925636/AnsiballZ_stat.py'
Feb 02 11:26:27 compute-0 sudo[224063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:27.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:27.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:27.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:27 compute-0 python3.9[224065]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:26:27 compute-0 ceph-mon[74676]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:27 compute-0 sudo[224063]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:27 compute-0 sudo[224218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayghbsnjbmhihurzomfunofthfbnoumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031587.3789046-4095-157313906840406/AnsiballZ_command.py'
Feb 02 11:26:27 compute-0 sudo[224218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:27 compute-0 python3.9[224220]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:27 compute-0 sudo[224218]: pam_unix(sudo:session): session closed for user root
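The command task logged just above reapplies the EDPM ruleset by feeding three files to a single `nft -f -` run, so the flush and the reload land in one nft transaction. A sketch of that pipeline (the file list comes from the log; the wrapper function is hypothetical):

import subprocess

RULE_FILES = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

def reload_ruleset() -> None:
    # Equivalent of: cat <files> | nft -f -
    payload = "".join(open(path).read() for path in RULE_FILES)
    # nft parses stdin as one script, so a parse error aborts the whole batch
    # instead of leaving the ruleset half flushed.
    subprocess.run(["nft", "-f", "-"], input=payload, text=True, check=True)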
Feb 02 11:26:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:28 compute-0 sudo[224373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cscziwsoepqrlyxovngeuyrubednjnez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031588.004579-4119-50089668753498/AnsiballZ_file.py'
Feb 02 11:26:28 compute-0 sudo[224373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:28.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:28 compute-0 python3.9[224375]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:28 compute-0 sudo[224373]: pam_unix(sudo:session): session closed for user root
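The stat at 11:26:27 and the file state=absent task above bracket a change-marker idiom: the ruleset is reloaded only while /etc/nftables/edpm-rules.nft.changed exists, and the marker is deleted once the reload has run. A sketch under that reading (the marker path is from the log; the function is hypothetical):

import os

MARKER = "/etc/nftables/edpm-rules.nft.changed"

def reload_if_changed(reload_fn) -> bool:
    # No marker, nothing to do: the rules files were not rewritten this run.
    if not os.path.exists(MARKER):
        return False
    reload_fn()            # e.g. reload_ruleset() from the sketch above
    os.remove(MARKER)      # consume the marker, as state=absent does here
    return True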
Feb 02 11:26:28 compute-0 sudo[224526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbuvrszchmsfckgqnmltmrtoaudeyfha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031588.7383132-4143-22452394559481/AnsiballZ_stat.py'
Feb 02 11:26:28 compute-0 sudo[224526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:29 compute-0 python3.9[224528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:29 compute-0 sudo[224526]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:29 compute-0 ceph-mon[74676]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:29 compute-0 sudo[224649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjusmxhdrwhocbuwpsajupymbxnoups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031588.7383132-4143-22452394559481/AnsiballZ_copy.py'
Feb 02 11:26:29 compute-0 sudo[224649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:26:29
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', '.nfs']
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:26:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:26:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:29 compute-0 python3.9[224651]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031588.7383132-4143-22452394559481/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:29 compute-0 sudo[224649]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:26:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:30 compute-0 sudo[224802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbahezdpeqokchmgfnkmdtrzfaeikee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031589.8242493-4188-60786190353995/AnsiballZ_stat.py'
Feb 02 11:26:30 compute-0 sudo[224802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:30 compute-0 python3.9[224804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:30 compute-0 sudo[224802]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:30 compute-0 sudo[224925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfczffuoirkxevwrwartxtjdosyjaqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031589.8242493-4188-60786190353995/AnsiballZ_copy.py'
Feb 02 11:26:30 compute-0 sudo[224925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:30 compute-0 python3.9[224927]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031589.8242493-4188-60786190353995/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:30 compute-0 sudo[224925]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:31 compute-0 sudo[225080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iueislybmgqbukrrwnkzauttnwgfkwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031590.9462593-4233-75300940066346/AnsiballZ_stat.py'
Feb 02 11:26:31 compute-0 sudo[225080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:31 compute-0 python3.9[225082]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:31 compute-0 sudo[225080]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:31 compute-0 sudo[225204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikofarcjepibftjinzasxgcbntgtxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031590.9462593-4233-75300940066346/AnsiballZ_copy.py'
Feb 02 11:26:31 compute-0 sudo[225204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:31 compute-0 python3.9[225206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031590.9462593-4233-75300940066346/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:31 compute-0 sudo[225204]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:31 compute-0 ceph-mon[74676]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:26:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:26:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:33 compute-0 ceph-mon[74676]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:33 compute-0 sudo[225357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqzbiputrzsjxqyjbgbpgbmiexciqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031592.7812078-4278-45229119253831/AnsiballZ_systemd.py'
Feb 02 11:26:33 compute-0 sudo[225357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:33 compute-0 python3.9[225359]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:26:33 compute-0 systemd[1]: Reloading.
Feb 02 11:26:33 compute-0 systemd-rc-local-generator[225387]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:26:33 compute-0 systemd-sysv-generator[225390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:26:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:33 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Feb 02 11:26:33 compute-0 sudo[225357]: pam_unix(sudo:session): session closed for user root
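The systemd task at 11:26:33 (daemon_reload=True enabled=True state=restarted name=edpm_libvirt.target) accounts for the surrounding journal lines: the "Reloading." generator pass and then "Reached target edpm_libvirt.target." Roughly the following systemctl sequence, sketched here for orientation rather than as the module's actual implementation:

import subprocess

def deploy_unit(unit: str = "edpm_libvirt.target") -> None:
    def systemctl(*argv: str) -> None:
        subprocess.run(["systemctl", *argv], check=True)

    systemctl("daemon-reload")   # pick up the unit file copied at 11:26:29
    systemctl("enable", unit)    # enabled=True
    systemctl("restart", unit)   # state=restarted -> "Reached target ..." above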
Feb 02 11:26:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:34 compute-0 sudo[225549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oivsyhiuwoxlzwwgcyjrxdpcbyyqkoiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031594.084275-4302-99963618902565/AnsiballZ_systemd.py'
Feb 02 11:26:34 compute-0 sudo[225549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:34 compute-0 python3.9[225551]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 11:26:34 compute-0 systemd[1]: Reloading.
Feb 02 11:26:34 compute-0 systemd-rc-local-generator[225578]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:26:34 compute-0 systemd-sysv-generator[225582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:26:35 compute-0 ceph-mon[74676]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:35 compute-0 systemd[1]: Reloading.
Feb 02 11:26:35 compute-0 systemd-sysv-generator[225618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:26:35 compute-0 systemd-rc-local-generator[225614]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:26:35 compute-0 sudo[225549]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:35 compute-0 sshd-session[165888]: Connection closed by 192.168.122.30 port 39732
Feb 02 11:26:35 compute-0 sshd-session[165885]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:26:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:35 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Feb 02 11:26:35 compute-0 systemd[1]: session-53.scope: Consumed 3min 11.562s CPU time.
Feb 02 11:26:35 compute-0 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Feb 02 11:26:35 compute-0 systemd-logind[793]: Removed session 53.
Feb 02 11:26:35 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
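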
Feb 02 11:26:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:36] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:26:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:36] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:26:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:37.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:26:37 compute-0 ceph-mon[74676]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:38.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:39 compute-0 ceph-mon[74676]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:40.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:41 compute-0 ceph-mon[74676]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:41 compute-0 sshd-session[225655]: Accepted publickey for zuul from 192.168.122.30 port 52372 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:26:41 compute-0 systemd-logind[793]: New session 54 of user zuul.
Feb 02 11:26:41 compute-0 systemd[1]: Started Session 54 of User zuul.
Feb 02 11:26:41 compute-0 sshd-session[225655]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:26:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0410002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:26:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:26:42 compute-0 python3.9[225809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:26:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:43 compute-0 ceph-mon[74676]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:43 compute-0 sudo[225914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:26:43 compute-0 sudo[225914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:26:43 compute-0 sudo[225914]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:43 compute-0 python3.9[225989]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:26:43 compute-0 network[226007]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:26:43 compute-0 network[226008]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:26:43 compute-0 network[226009]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:26:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:26:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:45 compute-0 ceph-mon[74676]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:26:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:46 compute-0 sudo[226281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrhtmnerzhsnlywntxejfpwzvlavbgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031606.4183242-96-147998685533688/AnsiballZ_setup.py'
Feb 02 11:26:46 compute-0 sudo[226281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:46 compute-0 python3.9[226283]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 11:26:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:46] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:26:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:46] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:26:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:47.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:26:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:47 compute-0 ceph-mon[74676]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:47 compute-0 sudo[226281]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:47 compute-0 sudo[226367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxzbpsliikqeuefyexrvmyxuuhjlanwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031606.4183242-96-147998685533688/AnsiballZ_dnf.py'
Feb 02 11:26:47 compute-0 sudo[226367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:47 compute-0 python3.9[226369]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:26:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:48.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:49 compute-0 ceph-mon[74676]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:50.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:51 compute-0 ceph-mon[74676]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:52 compute-0 podman[226375]: 2026-02-02 11:26:52.293717277 +0000 UTC m=+0.076157376 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
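The podman health_status events in this log (ovn_controller above, ovn_metadata_agent at 11:26:54) embed the container's config_data attribute as a Python-literal dict, which makes them easy to post-process. A small, purely illustrative sketch that extracts and parses that field from such a journal line (it assumes no brace characters inside the quoted values, which holds for these entries):

import ast

def extract_config_data(line: str) -> dict:
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Balanced: line[start:i+1] is the whole dict literal.
                return ast.literal_eval(line[start : i + 1])
    raise ValueError("unterminated config_data dict")

# e.g. extract_config_data(journal_line)["healthcheck"]["test"]
# -> '/openstack/healthcheck'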
Feb 02 11:26:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:53 compute-0 ceph-mon[74676]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:26:53 compute-0 sudo[226367]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:54 compute-0 sudo[226565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgygfsytqymeuiaulqcsqaycryoabom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031613.6236346-132-40471072302387/AnsiballZ_stat.py'
Feb 02 11:26:54 compute-0 sudo[226565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:54 compute-0 podman[226526]: 2026-02-02 11:26:54.042779736 +0000 UTC m=+0.072915923 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:26:54 compute-0 python3.9[226573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:26:54 compute-0 sudo[226565]: pam_unix(sudo:session): session closed for user root
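The stat task above asks for a SHA-1 checksum on top of the usual stat fields (get_checksum=True, checksum_algorithm=sha1, follow=False). A rough Python equivalent of what the module gathers for that call; the helper name is ours.

```python
# Sketch: approximate the facts ansible.builtin.stat returns for the call
# logged above (follow=False, get_checksum=True, checksum_algorithm=sha1).
import hashlib
import os

def stat_facts(path: str) -> dict:
    if not os.path.lexists(path):
        return {"exists": False}
    st = os.stat(path, follow_symlinks=False)   # follow=False
    facts = {"exists": True, "mode": oct(st.st_mode & 0o7777),
             "uid": st.st_uid, "gid": st.st_gid, "size": st.st_size}
    if os.path.isfile(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        facts["checksum"] = h.hexdigest()
    return facts

if __name__ == "__main__":
    print(stat_facts("/var/lib/config-data/puppet-generated/iscsid/etc/iscsi"))
```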
Feb 02 11:26:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:54.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:54.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:54 compute-0 sudo[226723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcejflnykorfqmfhkkfnaqxvrafakzid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031614.468116-162-43402557913909/AnsiballZ_command.py'
Feb 02 11:26:54 compute-0 sudo[226723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:55 compute-0 python3.9[226726]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:55 compute-0 sudo[226723]: pam_unix(sudo:session): session closed for user root
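restorecon -nvr is a dry run: -n makes no changes, -v prints each path whose SELinux context differs, -r recurses. The play can therefore use "any output" as the signal that a corrective relabel is needed, as in this sketch.

```python
# Sketch: run the same dry-run relabel check as the task above and report
# whether any path would change SELinux context (-n no-op, -v verbose,
# -r recursive).
import subprocess

def relabel_needed(*paths: str) -> bool:
    proc = subprocess.run(["/usr/sbin/restorecon", "-nvr", *paths],
                          capture_output=True, text=True, check=True)
    # One line is printed per path that would be relabeled, so any stdout
    # means the current contexts are wrong.
    return bool(proc.stdout.strip())

if __name__ == "__main__":
    print(relabel_needed("/etc/iscsi", "/var/lib/iscsi"))
```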
Feb 02 11:26:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:55 compute-0 ceph-mon[74676]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:55 compute-0 sudo[226878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxzeftjhasmytxlxiiyolvchbbyicys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031615.3660772-192-102960904697990/AnsiballZ_stat.py'
Feb 02 11:26:55 compute-0 sudo[226878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:55 compute-0 python3.9[226880]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:26:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:55 compute-0 sudo[226878]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:56 compute-0 sudo[227030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvopjetvmsahfhoxzfxhrcrucjfsrhux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031615.9512734-216-226813647431632/AnsiballZ_command.py'
Feb 02 11:26:56 compute-0 sudo[227030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:26:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:56 compute-0 python3.9[227032]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:26:56 compute-0 sudo[227030]: pam_unix(sudo:session): session closed for user root
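/usr/sbin/iscsi-iname prints a freshly generated initiator IQN, which the copy task a few lines below writes into /etc/iscsi/initiatorname.iscsi. A sketch producing a name of the same shape; the iqn.1994-05.com.redhat prefix is the tool's usual default and the 12-hex-digit suffix is an assumption.

```python
# Sketch: generate an initiator IQN shaped like iscsi-iname output, plus the
# line written to /etc/iscsi/initiatorname.iscsi. Prefix and suffix format
# are assumptions, not taken from the log.
import secrets

def make_iqn(prefix: str = "iqn.1994-05.com.redhat") -> str:
    return f"{prefix}:{secrets.token_hex(6)}"

if __name__ == "__main__":
    print(f"InitiatorName={make_iqn()}")
```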
Feb 02 11:26:56 compute-0 sudo[227183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdiozxegigmptypwggusnepqqjmkgnpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031616.5643866-240-245658204328732/AnsiballZ_stat.py'
Feb 02 11:26:56 compute-0 sudo[227183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:26:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:26:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:26:57 compute-0 python3.9[227185]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:26:57 compute-0 sudo[227183]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:26:57.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:26:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:57 compute-0 ceph-mon[74676]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:57 compute-0 sudo[227307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofrfryszdppwmgnyztinxirzymoqxcns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031616.5643866-240-245658204328732/AnsiballZ_copy.py'
Feb 02 11:26:57 compute-0 sudo[227307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:57 compute-0 python3.9[227309]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031616.5643866-240-245658204328732/.source.iscsi _original_basename=.8feztve_ follow=False checksum=c86153c98a38ad73bdf14b948cd3cdc0e816c83f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:57 compute-0 sudo[227307]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:58 compute-0 sudo[227460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piqopezjhdceiwcslkzfyjhpdtrgkczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031617.8301492-285-3708111259972/AnsiballZ_file.py'
Feb 02 11:26:58 compute-0 sudo[227460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:26:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:26:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:26:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:26:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:26:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:26:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:26:58 compute-0 python3.9[227462]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:58 compute-0 sudo[227460]: pam_unix(sudo:session): session closed for user root
Feb 02 11:26:59 compute-0 sudo[227613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtulvsvqwywnivqyjqwdaplryzvwbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031618.6359122-309-100237204817298/AnsiballZ_lineinfile.py'
Feb 02 11:26:59 compute-0 sudo[227613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:26:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408002c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:59 compute-0 python3.9[227615]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:26:59 compute-0 sudo[227613]: pam_unix(sudo:session): session closed for user root
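The lineinfile call above pins node.session.auth.chap_algs in /etc/iscsi/iscsid.conf: replace a line matching the regexp if one exists, otherwise insert after the commented default named by insertafter, otherwise append. A minimal sketch of just that behavior (the real module has more options, e.g. by default it replaces the last regexp match rather than the first).

```python
# Sketch: the subset of ansible.builtin.lineinfile behaviour used above.
import re

def line_in_file(path: str, regexp: str, line: str, insertafter: str) -> None:
    with open(path) as f:
        lines = f.read().splitlines()
    pat, anchor = re.compile(regexp), re.compile(insertafter)
    for i, text in enumerate(lines):
        if pat.search(text):
            lines[i] = line                  # state=present: replace
            break
    else:
        for i, text in enumerate(lines):
            if anchor.search(text):
                lines.insert(i + 1, line)    # insertafter fallback
                break
        else:
            lines.append(line)               # last resort: append
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

line_in_file("/etc/iscsi/iscsid.conf",
             regexp=r"^node.session.auth.chap_algs",
             line="node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5",
             insertafter=r"^#node.session.auth.chap.algs")
```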
Feb 02 11:26:59 compute-0 ceph-mon[74676]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:26:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
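This handle_command/audit pair is the mgr's periodic "osd blocklist ls" poll. The same JSON command can be sent from any authorized client; a sketch using the rados Python bindings, assuming python3-rados is installed and client.admin can read /etc/ceph/ceph.conf.

```python
# Sketch: issue the mon command from the audit log above via librados.
# Assumes python3-rados and a readable /etc/ceph/ceph.conf + admin keyring.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, out.decode() or errs)
finally:
    cluster.shutdown()
```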
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:26:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:26:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:26:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:26:59 compute-0 sudo[227766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efizthqvmzovxrrqduqbzhitalzpurbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031619.4373457-336-16038205141360/AnsiballZ_systemd_service.py'
Feb 02 11:26:59 compute-0 sudo[227766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:00 compute-0 python3.9[227768]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:00 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 02 11:27:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:00 compute-0 sudo[227766]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:00 compute-0 sudo[227922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflipyfncbfoarppmsymlxsadqkumuhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031620.5271285-360-102945948031896/AnsiballZ_systemd_service.py'
Feb 02 11:27:00 compute-0 sudo[227922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:01 compute-0 python3.9[227924]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:01 compute-0 systemd[1]: Reloading.
Feb 02 11:27:01 compute-0 systemd-sysv-generator[227958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 02 11:27:01 compute-0 systemd-rc-local-generator[227953]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:01 compute-0 ceph-mon[74676]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:01 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 02 11:27:01 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 02 11:27:01 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Feb 02 11:27:01 compute-0 systemd[1]: Started Open-iSCSI.
Feb 02 11:27:01 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Feb 02 11:27:01 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Feb 02 11:27:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04080035b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:01 compute-0 sudo[227922]: pam_unix(sudo:session): session closed for user root
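The two systemd_service tasks (iscsid.socket at 11:26:59 and iscsid at 11:27:00) each amount to enable-plus-start, which is what triggers the daemon-reload, the generator warnings, and the Open-iSCSI start sequence above. The module drives systemd directly; shelling out to systemctl, as below, is an approximation of the same effect.

```python
# Sketch: the combined effect of the two systemd_service tasks, approximated
# with systemctl. `enable --now` enables the unit and starts it in one step.
import subprocess

for unit in ("iscsid.socket", "iscsid.service"):
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)
```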
Feb 02 11:27:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04080035b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:02.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:02 compute-0 ceph-mon[74676]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:02 compute-0 python3.9[228126]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:27:02 compute-0 network[228143]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 02 11:27:02 compute-0 network[228144]: 'network-scripts' will be removed from the distribution in the near future.
Feb 02 11:27:02 compute-0 network[228145]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:27:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:03 compute-0 sudo[228163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:27:03 compute-0 sudo[228163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:03 compute-0 sudo[228163]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:04.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:04.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:04 compute-0 ceph-mon[74676]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:05 compute-0 sudo[228446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtwdzhrmrvbwwiwcopftwuyqjqtvaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031625.4823918-429-202572668639760/AnsiballZ_dnf.py'
Feb 02 11:27:05 compute-0 sudo[228446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:05 compute-0 python3.9[228448]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:27:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:06.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:06 compute-0 ceph-mon[74676]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:27:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:27:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:27:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:07.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
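Both dashboard webhook receivers (compute-1 and compute-2, port 8443, path /api/prometheus_receiver) are timing out, so the dispatcher gives up after its retry budget. A throwaway receiver such as this sketch, run on the target host, is enough to test whether the port and path are reachable at all; port and path come from the log, everything else is assumed.

```python
# Sketch: a stand-in for the dashboard webhook endpoint Alertmanager cannot
# reach above. Run on the target host to verify reachability of
# port 8443 / POST /api/prometheus_receiver; it just echoes the payload.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        print(self.path, json.loads(payload or b"{}"))  # Alertmanager sends JSON
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8443), Receiver).serve_forever()
```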
Feb 02 11:27:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:07 compute-0 sudo[228451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:27:07 compute-0 sudo[228451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:07 compute-0 sudo[228451]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:07 compute-0 sudo[228476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:27:07 compute-0 sudo[228476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:07 compute-0 sudo[228476]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:27:07 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:27:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:27:07 compute-0 sudo[228537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:27:07 compute-0 sudo[228537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:07 compute-0 sudo[228537]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:07 compute-0 sudo[228563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:27:07 compute-0 sudo[228563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:08 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:27:08 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:27:08 compute-0 systemd[1]: Reloading.
Feb 02 11:27:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:08.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.343629487 +0000 UTC m=+0.042553632 container create 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 11:27:08 compute-0 systemd-rc-local-generator[228678]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:08 compute-0 systemd-sysv-generator[228682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 02 11:27:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:08.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.322345816 +0000 UTC m=+0.021269991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:27:08 compute-0 systemd[1]: Started libpod-conmon-65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84.scope.
Feb 02 11:27:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.656183382 +0000 UTC m=+0.355107547 container init 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.664239433 +0000 UTC m=+0.363163568 container start 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.667838196 +0000 UTC m=+0.366762341 container attach 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:08 compute-0 fervent_nash[228788]: 167 167
Feb 02 11:27:08 compute-0 systemd[1]: libpod-65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84.scope: Deactivated successfully.
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.672084138 +0000 UTC m=+0.371008283 container died 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c90960338950038103f1c5801c684d372d4d7ae9be560c48f969e72145c665-merged.mount: Deactivated successfully.
Feb 02 11:27:08 compute-0 podman[228642]: 2026-02-02 11:27:08.799522273 +0000 UTC m=+0.498446418 container remove 65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nash, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:08 compute-0 systemd[1]: libpod-conmon-65dfaab42f5d5d79dc90c30edc986afe93f5fd80aaadf2421fa4c8868a1dea84.scope: Deactivated successfully.
Feb 02 11:27:08 compute-0 ceph-mon[74676]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:27:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:27:08 compute-0 systemd[1]: run-ra8650e5eabfd4fd9b67650a71fa33f86.service: Deactivated successfully.
Feb 02 11:27:08 compute-0 podman[228824]: 2026-02-02 11:27:08.944170622 +0000 UTC m=+0.049273934 container create 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:27:08 compute-0 systemd[1]: Started libpod-conmon-1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71.scope.
Feb 02 11:27:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:08.926590258 +0000 UTC m=+0.031693600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:09.035469071 +0000 UTC m=+0.140572403 container init 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:09.041676999 +0000 UTC m=+0.146780311 container start 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:09.045430346 +0000 UTC m=+0.150533678 container attach 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:27:09 compute-0 sudo[228446]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:09 compute-0 relaxed_archimedes[228842]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:27:09 compute-0 relaxed_archimedes[228842]: --> All data devices are unavailable
Feb 02 11:27:09 compute-0 systemd[1]: libpod-1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71.scope: Deactivated successfully.
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:09.398647528 +0000 UTC m=+0.503750840 container died 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d557011080432f0ef8829322e4079efa8c7d9c9262e1f6dbc3ad9c912731fdc9-merged.mount: Deactivated successfully.
Feb 02 11:27:09 compute-0 podman[228824]: 2026-02-02 11:27:09.441284481 +0000 UTC m=+0.546387793 container remove 1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:09 compute-0 systemd[1]: libpod-conmon-1d826a08a76a9ba0dedf0dc50ea5ecf1270d3f7e88ebe36d39d7cf573da88d71.scope: Deactivated successfully.
Feb 02 11:27:09 compute-0 sudo[228563]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:09 compute-0 sudo[228968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:27:09 compute-0 sudo[228968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:09 compute-0 sudo[228968]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200025b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:09 compute-0 sudo[229018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:27:09 compute-0 sudo[229018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
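After the batch call reported every data device unavailable, cephadm falls back to inventorying what already exists with `ceph-volume lvm list --format json` (the command just dispatched above). A sketch of consuming that inventory, assuming the JSON lands on stdout when run on the host.

```python
# Sketch: parse `ceph-volume lvm list --format json` the way the deploy flow
# consumes it: map OSD id -> LV type and backing devices. Assumes `cephadm`
# is on PATH and its JSON output arrives on stdout.
import json
import subprocess

out = subprocess.run(
    ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, entries in json.loads(out).items():
    for entry in entries:
        print(osd_id, entry.get("type"), ",".join(entry.get("devices", [])))
```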
Feb 02 11:27:09 compute-0 sudo[229067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqweokkjzgbqpptiwjflbtohqeoaeljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031629.3771756-456-70722170093003/AnsiballZ_file.py'
Feb 02 11:27:09 compute-0 sudo[229067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:09 compute-0 python3.9[229071]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 02 11:27:09 compute-0 sudo[229067]: pam_unix(sudo:session): session closed for user root
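The zuul sudo entries follow Ansible's become pattern: a /bin/sh that echoes a BECOME-SUCCESS marker and then runs the staged AnsiballZ payload under python3.9. The file task logged above amounts to ensuring /etc/modules-load.d exists as a 0755 directory; roughly, in Python (the SELinux selevel/setype handling the real module performs is omitted):

    import os

    # Rough equivalent of the logged ansible.builtin.file invocation:
    # path=/etc/modules-load.d, state=directory, mode=0755.
    path = "/etc/modules-load.d"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chmod(path, 0o755)  # enforce the mode when the directory pre-existed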
Feb 02 11:27:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:09 compute-0 sshd-session[228847]: Received disconnect from 91.224.92.78 port 49782:11:  [preauth]
Feb 02 11:27:09 compute-0 sshd-session[228847]: Disconnected from authenticating user root 91.224.92.78 port 49782 [preauth]
Feb 02 11:27:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.003336371 +0000 UTC m=+0.037585449 container create b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:27:10 compute-0 systemd[1]: Started libpod-conmon-b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7.scope.
Feb 02 11:27:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.076825859 +0000 UTC m=+0.111074957 container init b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:09.987089615 +0000 UTC m=+0.021338713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.11798776 +0000 UTC m=+0.152236838 container start b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:10 compute-0 reverent_northcutt[229203]: 167 167
Feb 02 11:27:10 compute-0 systemd[1]: libpod-b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7.scope: Deactivated successfully.
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.122111948 +0000 UTC m=+0.156361186 container attach b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.122842069 +0000 UTC m=+0.157091157 container died b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5028f3d895109dcc0356c30184a73c31a63fc494331277c469d81bdd89bb7d41-merged.mount: Deactivated successfully.
Feb 02 11:27:10 compute-0 podman[229155]: 2026-02-02 11:27:10.169529328 +0000 UTC m=+0.203778406 container remove b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:27:10 compute-0 systemd[1]: libpod-conmon-b3b54f92f87c63763943e5919a191e60f60c09dcabb5eddf8019e3f9d2adb6f7.scope: Deactivated successfully.
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.30449195 +0000 UTC m=+0.043443518 container create 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:27:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:10.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
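The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, arriving on a steady two-second cadence throughout this section, have the shape of load-balancer health probes against radosgw's beast frontend. The same probe can be issued by hand; the port below is a placeholder, since the journal does not record it:

    import http.client

    # Hypothetical radosgw frontend port; the journal does not record it.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 with a 0-byte body
    conn.close()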
Feb 02 11:27:10 compute-0 systemd[1]: Started libpod-conmon-18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24.scope.
Feb 02 11:27:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:10 compute-0 sudo[229315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewxmhvkxmkwjoqbsbeunycisxrtmjqay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031629.962667-480-246410643624641/AnsiballZ_modprobe.py'
Feb 02 11:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aab745c554eb3db438a83ab060b91524d913fb33f2e1667c914b55e240b775c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:10 compute-0 sudo[229315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aab745c554eb3db438a83ab060b91524d913fb33f2e1667c914b55e240b775c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aab745c554eb3db438a83ab060b91524d913fb33f2e1667c914b55e240b775c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aab745c554eb3db438a83ab060b91524d913fb33f2e1667c914b55e240b775c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.284847416 +0000 UTC m=+0.023799014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.384535635 +0000 UTC m=+0.123487233 container init 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.391545867 +0000 UTC m=+0.130497435 container start 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.395256253 +0000 UTC m=+0.134207821 container attach 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:27:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:10.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:10 compute-0 python3.9[229320]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 02 11:27:10 compute-0 sudo[229315]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:10 compute-0 funny_shannon[229316]: {
Feb 02 11:27:10 compute-0 funny_shannon[229316]:     "1": [
Feb 02 11:27:10 compute-0 funny_shannon[229316]:         {
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "devices": [
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "/dev/loop3"
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             ],
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "lv_name": "ceph_lv0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "lv_size": "21470642176",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "name": "ceph_lv0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "tags": {
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.cluster_name": "ceph",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.crush_device_class": "",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.encrypted": "0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.osd_id": "1",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.type": "block",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.vdo": "0",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:                 "ceph.with_tpm": "0"
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             },
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "type": "block",
Feb 02 11:27:10 compute-0 funny_shannon[229316]:             "vg_name": "ceph_vg0"
Feb 02 11:27:10 compute-0 funny_shannon[229316]:         }
Feb 02 11:27:10 compute-0 funny_shannon[229316]:     ]
Feb 02 11:27:10 compute-0 funny_shannon[229316]: }
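The JSON block above is the result of the "ceph-volume lvm list --format json" run started under sudo session 229018, relayed through the one-shot funny_shannon container: one key per OSD id, each holding the backing logical volume, its devices, and its ceph.* tags. A small sketch of consuming such a report, assuming it has been saved to a (hypothetical) file:

    import json

    # Hypothetical path holding the JSON emitted above.
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Keys are OSD ids ("1" here); values are lists of LV records.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")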
Feb 02 11:27:10 compute-0 systemd[1]: libpod-18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24.scope: Deactivated successfully.
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.688536205 +0000 UTC m=+0.427487763 container died 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aab745c554eb3db438a83ab060b91524d913fb33f2e1667c914b55e240b775c-merged.mount: Deactivated successfully.
Feb 02 11:27:10 compute-0 podman[229250]: 2026-02-02 11:27:10.740423893 +0000 UTC m=+0.479375451 container remove 18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:27:10 compute-0 systemd[1]: libpod-conmon-18db1acfb9f732e779b4cbc9af5ed7527204697663a95e9989f4ee871bf2bd24.scope: Deactivated successfully.
Feb 02 11:27:10 compute-0 sudo[229018]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:10 compute-0 sudo[229418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:27:10 compute-0 sudo[229418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:10 compute-0 sudo[229418]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:10 compute-0 sudo[229465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:27:10 compute-0 sudo[229465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:10 compute-0 ceph-mon[74676]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:10 compute-0 sudo[229542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfchwkijaeqgevkvsvuypbkfqhudtjyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031630.7461352-504-125925898363681/AnsiballZ_stat.py'
Feb 02 11:27:10 compute-0 sudo[229542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:11 compute-0 python3.9[229544]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:27:11 compute-0 sudo[229542]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.291266483 +0000 UTC m=+0.038640369 container create 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:27:11 compute-0 systemd[1]: Started libpod-conmon-96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1.scope.
Feb 02 11:27:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.275132191 +0000 UTC m=+0.022506097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.374402828 +0000 UTC m=+0.121776744 container init 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.381698007 +0000 UTC m=+0.129071893 container start 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.385684602 +0000 UTC m=+0.133058488 container attach 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:27:11 compute-0 elastic_snyder[229650]: 167 167
Feb 02 11:27:11 compute-0 systemd[1]: libpod-96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1.scope: Deactivated successfully.
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.387476053 +0000 UTC m=+0.134849939 container died 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0aeec5d08fc72bf246c92f9fb354286527286d6a1d559aa178f50bbebc85dac-merged.mount: Deactivated successfully.
Feb 02 11:27:11 compute-0 podman[229604]: 2026-02-02 11:27:11.421946052 +0000 UTC m=+0.169319938 container remove 96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:27:11 compute-0 systemd[1]: libpod-conmon-96d64e001d4ad1174d049f01336a41738f8a5876aa00ddc887ae280612c93ea1.scope: Deactivated successfully.
Feb 02 11:27:11 compute-0 sudo[229739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacmfhiiocfycexznwwbwfhpsvtrzhdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031630.7461352-504-125925898363681/AnsiballZ_copy.py'
Feb 02 11:27:11 compute-0 sudo[229739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:11 compute-0 podman[229747]: 2026-02-02 11:27:11.562717509 +0000 UTC m=+0.045491045 container create 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:27:11 compute-0 systemd[1]: Started libpod-conmon-9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423.scope.
Feb 02 11:27:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a760f2542845580ee1cc42e316db3570f96b227fffc8360328c19747ac411b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a760f2542845580ee1cc42e316db3570f96b227fffc8360328c19747ac411b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a760f2542845580ee1cc42e316db3570f96b227fffc8360328c19747ac411b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a760f2542845580ee1cc42e316db3570f96b227fffc8360328c19747ac411b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:27:11 compute-0 podman[229747]: 2026-02-02 11:27:11.543815157 +0000 UTC m=+0.026588723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:27:11 compute-0 podman[229747]: 2026-02-02 11:27:11.651132816 +0000 UTC m=+0.133906372 container init 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:27:11 compute-0 podman[229747]: 2026-02-02 11:27:11.657967672 +0000 UTC m=+0.140741208 container start 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:27:11 compute-0 podman[229747]: 2026-02-02 11:27:11.661960546 +0000 UTC m=+0.144734112 container attach 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:27:11 compute-0 python3.9[229744]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031630.7461352-504-125925898363681/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:11 compute-0 sudo[229739]: pam_unix(sudo:session): session closed for user root
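The copy task above installs /etc/modules-load.d/dm-multipath.conf, rendered from module-load.conf.j2. The journal does not show the rendered content, but the modules-load.d(5) format is one module name per line, so presumably something like:

    # /etc/modules-load.d/dm-multipath.conf (presumed content)
    dm-multipath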
Feb 02 11:27:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200025b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:12 compute-0 sudo[229984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoptdlfklwgmpyzhsoagbloiucxwpmor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031631.9748938-552-70913531024563/AnsiballZ_lineinfile.py'
Feb 02 11:27:12 compute-0 sudo[229984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:12 compute-0 lvm[229991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:27:12 compute-0 lvm[229991]: VG ceph_vg0 finished
Feb 02 11:27:12 compute-0 fervent_matsumoto[229765]: {}
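The {} printed by fervent_matsumoto is the output of the "ceph-volume raw list --format json" run opened under sudo session 229465: this host has no raw-mode OSDs, so the device report comes entirely from the LVM listing shown earlier. Merging the two inventories then looks roughly like this (file names hypothetical):

    import json

    with open("lvm_list.json") as f:   # the populated report shown earlier
        lvm_report = json.load(f)
    with open("raw_list.json") as f:   # literally "{}" in this log
        raw_report = json.load(f)

    # LVM-backed and raw-device OSDs are complementary sets of OSD ids.
    print(sorted(set(lvm_report) | set(raw_report)))  # ['1'] here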
Feb 02 11:27:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:12.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:12 compute-0 systemd[1]: libpod-9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423.scope: Deactivated successfully.
Feb 02 11:27:12 compute-0 systemd[1]: libpod-9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423.scope: Consumed 1.063s CPU time.
Feb 02 11:27:12 compute-0 conmon[229765]: conmon 9da3ccb804b59ebc1614 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423.scope/container/memory.events
Feb 02 11:27:12 compute-0 podman[229747]: 2026-02-02 11:27:12.449463354 +0000 UTC m=+0.932236890 container died 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a760f2542845580ee1cc42e316db3570f96b227fffc8360328c19747ac411b1-merged.mount: Deactivated successfully.
Feb 02 11:27:12 compute-0 python3.9[229989]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:12 compute-0 podman[229747]: 2026-02-02 11:27:12.492411835 +0000 UTC m=+0.975185371 container remove 9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:27:12 compute-0 systemd[1]: libpod-conmon-9da3ccb804b59ebc1614df838e102f030a45359f1cc975fc26fd545ec66f4423.scope: Deactivated successfully.
Feb 02 11:27:12 compute-0 sudo[229984]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:12 compute-0 sudo[229465]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:27:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:27:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:12 compute-0 sudo[230030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:27:12 compute-0 sudo[230030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:12 compute-0 sudo[230030]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:13 compute-0 ceph-mon[74676]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:13 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:13 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:27:13 compute-0 sudo[230181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmsvxxlsyjlboikfmntaeqbycqglcwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031632.6689413-576-117358713171729/AnsiballZ_systemd.py'
Feb 02 11:27:13 compute-0 sudo[230181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040013d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:13 compute-0 python3.9[230183]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:27:13 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 02 11:27:13 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 02 11:27:13 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 02 11:27:13 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 11:27:13 compute-0 systemd[1]: Finished Load Kernel Modules.
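Restarting systemd-modules-load.service makes it re-read /etc/modules-load.d/, now including the dm-multipath entry. A quick check that the module actually loaded (the kernel normalizes '-' to '_' in module names):

    # List currently loaded modules and look for dm_multipath.
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}
    print("dm_multipath" in loaded)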
Feb 02 11:27:13 compute-0 sudo[230181]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:14 compute-0 sudo[230338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqsqjkmldtqqbucuuzfkuaamhloxynuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031633.9663033-600-172738266569797/AnsiballZ_command.py'
Feb 02 11:27:14 compute-0 sudo[230338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:14 compute-0 python3.9[230340]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:27:14 compute-0 sudo[230338]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:14.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:27:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
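Here the mon dispatches an "osd blocklist ls" query on behalf of mgr.compute-0.dhyzzj. The equivalent query can be run from any host with the ceph CLI and a readable client keyring; a minimal sketch:

    import json
    import subprocess

    # Same query the mgr issues above; needs ceph CLI plus a client keyring.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Expecting a JSON array of addr/expiry entries; guard an empty reply.
    print(len(json.loads(out or "[]")), "blocklist entries")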
Feb 02 11:27:14 compute-0 sudo[230492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdzfvswhejoebttijtjqxjtzrpothziv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031634.7383075-630-104645720420826/AnsiballZ_stat.py'
Feb 02 11:27:14 compute-0 sudo[230492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200025b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:15 compute-0 python3.9[230494]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:27:15 compute-0 ceph-mon[74676]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:15 compute-0 sudo[230492]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:15 compute-0 sudo[230645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyjgmzqsrctbuonooiqsroveijxbooxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031635.3894572-657-270323605729107/AnsiballZ_stat.py'
Feb 02 11:27:15 compute-0 sudo[230645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:15 compute-0 python3.9[230647]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:27:15 compute-0 sudo[230645]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:16 compute-0 sudo[230768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzubbxoyivzfzpnupioirtscnlqoystx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031635.3894572-657-270323605729107/AnsiballZ_copy.py'
Feb 02 11:27:16 compute-0 sudo[230768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:16 compute-0 python3.9[230770]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031635.3894572-657-270323605729107/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:16 compute-0 sudo[230768]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:16.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:16 compute-0 sudo[230920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zojojmeyecqcvufdltoijsxhjbapehgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031636.5273871-702-273095383528929/AnsiballZ_command.py'
Feb 02 11:27:16 compute-0 sudo[230920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:16 compute-0 python3.9[230922]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:27:16 compute-0 sudo[230920]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:27:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:27:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:17.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:27:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:17.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:27:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:17 compute-0 ceph-mon[74676]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:17 compute-0 sudo[231074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftuxgofhuyohrdryczakvthfyradizdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031637.1524198-726-200038669423406/AnsiballZ_lineinfile.py'
Feb 02 11:27:17 compute-0 sudo[231074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:17 compute-0 python3.9[231076]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:17 compute-0 sudo[231074]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:18 compute-0 sudo[231227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkdshcxxmfyoocshoaatntdfcyfjqsav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031637.7830002-750-231586170408846/AnsiballZ_replace.py'
Feb 02 11:27:18 compute-0 sudo[231227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:18.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:18 compute-0 ceph-mon[74676]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:18.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:18 compute-0 python3.9[231229]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:18 compute-0 sudo[231227]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:18 compute-0 sudo[231380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxdpupdwqfivxzwrxcqwiawanrjwghk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031638.6534011-774-158323322394576/AnsiballZ_replace.py'
Feb 02 11:27:18 compute-0 sudo[231380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:19 compute-0 python3.9[231382]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:19 compute-0 sudo[231380]: pam_unix(sudo:session): session closed for user root
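The two ansible.builtin.replace tasks above (11:27:18 and 11:27:19) are ordinary Python regex substitutions; the module applies re in multiline mode. A minimal re-enactment on throwaway sample strings, one per invocation (assumed inputs, not the real file contents):

    import re

    # 11:27:18 task: close the freshly added block.
    s1 = re.sub(r'^(blacklist {)', r'\1\n}', 'blacklist {\n', flags=re.MULTILINE)
    assert s1 == 'blacklist {\n}\n'

    # 11:27:19 task: collapse a legacy catch-all blacklist, if one exists.
    s2 = re.sub(r'^blacklist\s*{\n[\s]+devnode "\.\*"', 'blacklist {',
                'blacklist {\n    devnode ".*"\n}\n', flags=re.MULTILINE)
    assert s2 == 'blacklist {\n}\n'

Net effect of the three multipath.conf edits so far: an empty blacklist {} block, whether or not the file previously blacklisted devnode ".*".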
Feb 02 11:27:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:19 compute-0 sudo[231533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdhylfgcdahjqxqqptukqshjwjzedlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031639.4177136-801-9798850232883/AnsiballZ_lineinfile.py'
Feb 02 11:27:19 compute-0 sudo[231533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:19 compute-0 python3.9[231535]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:19 compute-0 sudo[231533]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:20 compute-0 sudo[231685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoncrvhwgtidjsumzdcibrlzafsmzfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031640.023687-801-171213341066938/AnsiballZ_lineinfile.py'
Feb 02 11:27:20 compute-0 sudo[231685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
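These paired HEAD / probes arriving every two seconds from 192.168.122.100 and .102 look like load-balancer health checks. The beast access lines follow a fixed layout (address, user, timestamp, request, status, bytes, then latency), so they are easy to mine; a small parser sketch, with the field layout inferred from the samples in this journal:

    import re

    # Field layout inferred from the beast lines above; illustrative only.
    BEAST = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous '
            '[02/Feb/2026:11:27:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m['addr'], m['status'], m['latency'])   # 192.168.122.100 200 0.000000000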
Feb 02 11:27:20 compute-0 python3.9[231687]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:20 compute-0 sudo[231685]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:20 compute-0 sudo[231838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usmnveiebhzyvzghkljazmjltcgefmft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031640.6368191-801-246551096075490/AnsiballZ_lineinfile.py'
Feb 02 11:27:20 compute-0 sudo[231838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:20 compute-0 ceph-mon[74676]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:21 compute-0 python3.9[231840]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:21 compute-0 sudo[231838]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:21 compute-0 sudo[231990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcitqidbghycobmaeabjnlzieoglfjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031641.25314-801-174607116903739/AnsiballZ_lineinfile.py'
Feb 02 11:27:21 compute-0 sudo[231990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:21 compute-0 python3.9[231992]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:21 compute-0 sudo[231990]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:22 compute-0 sudo[232143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdrsschvlyrohckkywutamgvgeqrymfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031641.9744773-888-191570628340552/AnsiballZ_stat.py'
Feb 02 11:27:22 compute-0 sudo[232143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:22.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:22.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:22 compute-0 python3.9[232145]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:27:22 compute-0 sudo[232143]: pam_unix(sudo:session): session closed for user root
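With the stat at 11:27:22 the multipath.conf sequence is complete: one lineinfile for the blacklist opener, two replace calls to close it and strip the catch-all devnode, and four lineinfile tasks for the defaults options. Because each defaults edit uses insertafter=^defaults with firstmatch=True, options that were absent are inserted directly under the defaults line, so the last-added one ends up first (pre-existing entries would instead be rewritten in place by their regexp). A sketch of the resulting fragment, assuming none of the options were present before and omitting the rest of the stock file:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }
    blacklist {
    }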
Feb 02 11:27:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:27:22.660 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:27:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:27:22.660 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:27:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:27:22.660 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:27:22 compute-0 sudo[232310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-repkfmnntbbuwsmphyuhrmkzvybmyhts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031642.6887503-912-84591908588119/AnsiballZ_command.py'
Feb 02 11:27:22 compute-0 sudo[232310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:23 compute-0 ceph-mon[74676]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:27:23 compute-0 podman[232272]: 2026-02-02 11:27:23.050567643 +0000 UTC m=+0.105669062 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 02 11:27:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:23 compute-0 python3.9[232317]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:27:23 compute-0 sudo[232310]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:23 compute-0 sudo[232353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:27:23 compute-0 sudo[232353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:23 compute-0 sudo[232353]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:23 compute-0 sudo[232504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjznztgsfpchbrpsnqllujcknpcsfvyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031643.5970168-939-117193566603452/AnsiballZ_systemd_service.py'
Feb 02 11:27:23 compute-0 sudo[232504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:24 compute-0 python3.9[232506]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:24 compute-0 podman[232507]: 2026-02-02 11:27:24.264470831 +0000 UTC m=+0.051849359 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
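The podman health_status records above embed their container configuration as config_data=..., a Python dict literal rather than JSON (single quotes, bare True), so ast.literal_eval is the right tool. A sketch that brace-matches the literal out of a line and parses it (naive brace counting, which is safe here because these lines quote no stray braces):

    import ast

    def extract_dict(line, key='config_data='):
        # Brace-match the dict literal that follows `key`, then parse it safely
        # (literal_eval accepts only literals; no code is executed).
        start = line.index(key) + len(key)
        depth = 0
        for i in range(start, len(line)):
            if line[i] == '{':
                depth += 1
            elif line[i] == '}':
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError('unbalanced braces')

    # Abbreviated sample in the same shape as the ovn_controller line above.
    sample = ("container health_status daf63181ce0c (name=ovn_controller, "
              "config_data={'net': 'host', 'privileged': True, "
              "'healthcheck': {'test': '/openstack/healthcheck'}}, "
              "config_id=ovn_controller)")
    print(extract_dict(sample)['healthcheck']['test'])   # /openstack/healthcheck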
Feb 02 11:27:24 compute-0 systemd[1]: Listening on multipathd control socket.
Feb 02 11:27:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:24.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:24 compute-0 sudo[232504]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:24.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:24 compute-0 sudo[232676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbatqcldnoxuabkykaeucwhzsffgvcxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031644.5716834-963-162391865020966/AnsiballZ_systemd_service.py'
Feb 02 11:27:24 compute-0 sudo[232676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:25 compute-0 ceph-mon[74676]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:25 compute-0 python3.9[232679]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:25 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 02 11:27:25 compute-0 udevadm[232684]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 02 11:27:25 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 02 11:27:25 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 02 11:27:25 compute-0 multipathd[232688]: --------start up--------
Feb 02 11:27:25 compute-0 multipathd[232688]: read /etc/multipath.conf
Feb 02 11:27:25 compute-0 multipathd[232688]: path checkers start up
Feb 02 11:27:25 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 02 11:27:25 compute-0 sudo[232676]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:26.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:26 compute-0 sudo[232846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgfrwlsgfqdpnmfjognsanvwsaihhtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031646.0668192-999-239716259321735/AnsiballZ_file.py'
Feb 02 11:27:26 compute-0 sudo[232846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:27:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:26.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:27:26 compute-0 python3.9[232848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 02 11:27:26 compute-0 sudo[232846]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:27] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:27:27 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:27] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:27.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:27.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:27 compute-0 sudo[232999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefaenysijvjrbvecwlhujopypnvvrge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031646.8431962-1023-193952725852441/AnsiballZ_modprobe.py'
Feb 02 11:27:27 compute-0 sudo[232999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:27 compute-0 ceph-mon[74676]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:27 compute-0 python3.9[233001]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 02 11:27:27 compute-0 kernel: Key type psk registered
Feb 02 11:27:27 compute-0 sudo[232999]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:28 compute-0 sudo[233161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebozoguezxlxgomgzrgpnyovkywytpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031647.6971405-1047-122866675597933/AnsiballZ_stat.py'
Feb 02 11:27:28 compute-0 sudo[233161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:28 compute-0 python3.9[233163]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:27:28 compute-0 sudo[233161]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:28.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:28.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:28 compute-0 sudo[233284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epqjzoqlfzfwslgdbbsdyehmtcenezfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031647.6971405-1047-122866675597933/AnsiballZ_copy.py'
Feb 02 11:27:28 compute-0 sudo[233284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:28 compute-0 python3.9[233286]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031647.6971405-1047-122866675597933/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:28 compute-0 sudo[233284]: pam_unix(sudo:session): session closed for user root
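The copy task installs /etc/modules-load.d/nvme-fabrics.conf from the module-load.conf.j2 template; the payload itself is not logged (content=NOT_LOGGING_PARAMETER). Judging by the template name and the matching /etc/modules line added at 11:27:30 below, the rendered file is presumably just the module name, which is all systemd-modules-load needs:

    # /etc/modules-load.d/nvme-fabrics.conf (presumed rendering; not logged)
    nvme-fabrics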
Feb 02 11:27:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:29 compute-0 ceph-mon[74676]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:27:29
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.control', '.nfs', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root']
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:27:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:27:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
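Each pg_autoscaler target above is simply usage * bias * N with N = 300 on this cluster; N is not printed, but 300 would be consistent with a target of 100 PGs per OSD across 3 OSDs (an inference, as is that interpretation). The printed targets reproduce exactly:

    # Reproduce the pg_autoscaler "pg target" values from the lines above.
    N = 300                          # inferred scale factor (e.g. 100 PGs/OSD x 3 OSDs)
    pools = [                        # (name, usage fraction, bias, printed target)
        ('.mgr',               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ('cephfs.cephfs.meta', 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        ('default.rgw.log',    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        ('.rgw.root',          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
    ]
    for name, usage, bias, printed in pools:
        assert abs(usage * bias * N - printed) < 1e-12, name

The "quantized to" step then rounds to a power of two, subject to per-pool floors (the 16 for cephfs.cephfs.meta presumably reflects a pg_num_min on that pool) and to keeping pools at their current pg_num when the raw target is far smaller.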
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:27:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112729 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:27:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:30 compute-0 sudo[233438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psuwrdahklckfqxxxnzqpjpnhqyinoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031649.8458502-1095-206113824019795/AnsiballZ_lineinfile.py'
Feb 02 11:27:30 compute-0 sudo[233438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:30.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:30 compute-0 python3.9[233440]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:30 compute-0 sudo[233438]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:30.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:30 compute-0 ceph-mon[74676]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:27:30 compute-0 sudo[233590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummhgtuqsanzmghlpdxzamyfcmuoqyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031650.5554864-1119-235988619010537/AnsiballZ_systemd.py'
Feb 02 11:27:30 compute-0 sudo[233590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:31 compute-0 python3.9[233592]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:27:31 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 02 11:27:31 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 02 11:27:31 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 02 11:27:31 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 11:27:31 compute-0 systemd[1]: Finished Load Kernel Modules.
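Restarting systemd-modules-load.service makes the new drop-in take effect without a reboot; the module itself was already live from the modprobe at 11:27:27 (the kernel's "Key type psk registered" line appeared as the module and its dependencies initialized). A quick verification sketch (illustrative; note that sysfs spells the module with an underscore):

    from pathlib import Path

    # Loaded now?  /sys/module uses '_' where the module name has '-'.
    assert Path('/sys/module/nvme_fabrics').exists(), 'nvme-fabrics not loaded'
    # Loaded on future boots?
    conf = Path('/etc/modules-load.d/nvme-fabrics.conf').read_text()
    assert 'nvme-fabrics' in conf, 'no boot-time load configured'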
Feb 02 11:27:31 compute-0 sudo[233590]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:27:31 compute-0 sudo[233748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syrrmpyogpytlmdoujcqflkcojvctcmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031651.568273-1143-252490104754339/AnsiballZ_dnf.py'
Feb 02 11:27:31 compute-0 sudo[233748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:32 compute-0 python3.9[233750]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 11:27:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:32.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:32 compute-0 ceph-mon[74676]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:27:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:34.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:34 compute-0 ceph-mon[74676]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:34 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb 02 11:27:34 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:34.948496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:27:34 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb 02 11:27:34 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031654948550, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4193, "num_deletes": 502, "total_data_size": 8518627, "memory_usage": 8632960, "flush_reason": "Manual Compaction"}
Feb 02 11:27:34 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb 02 11:27:34 compute-0 systemd[1]: Reloading.
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031655012233, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8264757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13175, "largest_seqno": 17367, "table_properties": {"data_size": 8247127, "index_size": 11884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36422, "raw_average_key_size": 19, "raw_value_size": 8210767, "raw_average_value_size": 4433, "num_data_blocks": 520, "num_entries": 1852, "num_filter_entries": 1852, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031215, "oldest_key_time": 1770031215, "file_creation_time": 1770031654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 63830 microseconds, and 12930 cpu microseconds.
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.012324) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8264757 bytes OK
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.012355) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.026316) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.026383) EVENT_LOG_v1 {"time_micros": 1770031655026371, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.026416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8501878, prev total WAL file size 8501878, number of live WAL files 2.
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.028805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8071KB)], [32(11MB)]
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031655028860, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20296890, "oldest_snapshot_seqno": -1}
Feb 02 11:27:35 compute-0 systemd-rc-local-generator[233784]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:35 compute-0 systemd-sysv-generator[233788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:27:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5057 keys, 15475585 bytes, temperature: kUnknown
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031655139910, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15475585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15437149, "index_size": 24694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126512, "raw_average_key_size": 25, "raw_value_size": 15340925, "raw_average_value_size": 3033, "num_data_blocks": 1038, "num_entries": 5057, "num_filter_entries": 5057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031655, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.140500) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15475585 bytes
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.143715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.5 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.3) write-amplify(1.9) OK, records in: 6079, records dropped: 1022 output_compression: NoCompression
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.143758) EVENT_LOG_v1 {"time_micros": 1770031655143730, "job": 14, "event": "compaction_finished", "compaction_time_micros": 111395, "compaction_time_cpu_micros": 29660, "output_level": 6, "num_output_files": 1, "total_output_size": 15475585, "num_input_records": 6079, "num_output_records": 5057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031655144544, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031655145635, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.028656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.145723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.145730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.145732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.145751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:27:35.145753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:27:35 compute-0 systemd[1]: Reloading.
Feb 02 11:27:35 compute-0 systemd-sysv-generator[233822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:27:35 compute-0 systemd-rc-local-generator[233819]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:35 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 02 11:27:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:35 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 02 11:27:35 compute-0 lvm[233865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:27:35 compute-0 lvm[233865]: VG ceph_vg0 finished
Feb 02 11:27:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 11:27:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 11:27:36 compute-0 systemd[1]: Reloading.
Feb 02 11:27:36 compute-0 systemd-rc-local-generator[233917]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:36 compute-0 systemd-sysv-generator[233921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:27:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:36.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 11:27:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:36 compute-0 ceph-mon[74676]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:36 compute-0 sudo[233748]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:27:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:27:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:27:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:37 compute-0 sudo[235218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crdhakibbjzamelyvuguvycubnigifos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031657.1847835-1167-271368446830670/AnsiballZ_systemd_service.py'
Feb 02 11:27:37 compute-0 sudo[235218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 11:27:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 11:27:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.574s CPU time.
Feb 02 11:27:37 compute-0 systemd[1]: run-ra00eb5da58fa46edb5287ad1c07e3b3b.service: Deactivated successfully.
Feb 02 11:27:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:37 compute-0 python3.9[235220]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:27:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:37 compute-0 systemd[1]: Stopping Open-iSCSI...
Feb 02 11:27:37 compute-0 iscsid[227965]: iscsid shutting down.
Feb 02 11:27:37 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Feb 02 11:27:37 compute-0 systemd[1]: Stopped Open-iSCSI.
Feb 02 11:27:37 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 02 11:27:37 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 02 11:27:37 compute-0 systemd[1]: Started Open-iSCSI.
Feb 02 11:27:37 compute-0 sudo[235218]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:38 compute-0 sudo[235377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlhpcewrtubhnedaxcuzpcprgflmhrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031658.0650852-1191-135300755044440/AnsiballZ_systemd_service.py'
Feb 02 11:27:38 compute-0 sudo[235377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:27:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:27:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:38.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:38 compute-0 python3.9[235379]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:27:38 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 02 11:27:38 compute-0 multipathd[232688]: exit (signal)
Feb 02 11:27:38 compute-0 multipathd[232688]: --------shut down-------
Feb 02 11:27:38 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Feb 02 11:27:38 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 02 11:27:38 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 02 11:27:38 compute-0 multipathd[235385]: --------start up--------
Feb 02 11:27:38 compute-0 multipathd[235385]: read /etc/multipath.conf
Feb 02 11:27:38 compute-0 multipathd[235385]: path checkers start up
Feb 02 11:27:38 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 02 11:27:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:38 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:27:38 compute-0 sudo[235377]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:39 compute-0 ceph-mon[74676]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:39 compute-0 python3.9[235543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 11:27:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:27:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:27:40 compute-0 sudo[235698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xccmvfvidicuabvzfdwucixepeuvdvlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031660.1352496-1243-97234775125839/AnsiballZ_file.py'
Feb 02 11:27:40 compute-0 sudo[235698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:40.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:40 compute-0 python3.9[235700]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:40 compute-0 sudo[235698]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:41 compute-0 ceph-mon[74676]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:27:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:41 compute-0 sudo[235851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ispjctanecglodwdpguknratkyjaadld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031661.145981-1276-247225312820678/AnsiballZ_systemd_service.py'
Feb 02 11:27:41 compute-0 sudo[235851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:41 compute-0 python3.9[235853]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:27:41 compute-0 systemd[1]: Reloading.
Feb 02 11:27:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:27:41 compute-0 systemd-sysv-generator[235887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:27:41 compute-0 systemd-rc-local-generator[235882]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:27:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:27:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:27:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:42 compute-0 sudo[235851]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:42.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:27:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:42.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:27:42 compute-0 python3.9[236040]: ansible-ansible.builtin.service_facts Invoked
Feb 02 11:27:42 compute-0 network[236057]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 11:27:42 compute-0 network[236058]: 'network-scripts' will be removed from distribution in near future.
Feb 02 11:27:42 compute-0 network[236059]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 11:27:43 compute-0 ceph-mon[74676]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:27:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:43 compute-0 sudo[236074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:27:43 compute-0 sudo[236074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:27:43 compute-0 sudo[236074]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:27:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:27:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:44.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:27:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:44.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:27:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:44 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:27:45 compute-0 ceph-mon[74676]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:27:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a6c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:46 compute-0 sudo[236359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeuentbgvfnmlklgidpmiwdqycpmflcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031665.872603-1333-249005899514904/AnsiballZ_systemd_service.py'
Feb 02 11:27:46 compute-0 sudo[236359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:46.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:46 compute-0 python3.9[236361]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:46 compute-0 sudo[236359]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:46 compute-0 sudo[236513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrganffpvwnmsprpfizvzgasvbfhdld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031666.6609468-1333-268505106785316/AnsiballZ_systemd_service.py'
Feb 02 11:27:46 compute-0 sudo[236513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:46] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Feb 02 11:27:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:46] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Feb 02 11:27:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:47.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:27:47 compute-0 ceph-mon[74676]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:47 compute-0 python3.9[236515]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:47 compute-0 sudo[236513]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:47 compute-0 sudo[236667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yglitckiekuxflisxqicwsovfaegsdiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031667.4472237-1333-250020971770649/AnsiballZ_systemd_service.py'
Feb 02 11:27:47 compute-0 sudo[236667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:48 compute-0 python3.9[236669]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:48 compute-0 sudo[236667]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:48.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:48.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:48 compute-0 sudo[236820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmjeefpfjlfchoqbsvyljiapwjwtmoas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031668.3003476-1333-139649657325783/AnsiballZ_systemd_service.py'
Feb 02 11:27:48 compute-0 sudo[236820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:48 compute-0 python3.9[236822]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:48 compute-0 sudo[236820]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:49 compute-0 ceph-mon[74676]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a6e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:49 compute-0 sudo[236974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmfbmqodtgfaapodfpjzyvxeiqsiurwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031669.0573664-1333-164264033945345/AnsiballZ_systemd_service.py'
Feb 02 11:27:49 compute-0 sudo[236974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:49 compute-0 python3.9[236976]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:49 compute-0 sudo[236974]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112749 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:27:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:50 compute-0 sudo[237128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axrdymkyzrdxupqeloucfklcwrylmprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031669.8736002-1333-11258333753007/AnsiballZ_systemd_service.py'
Feb 02 11:27:50 compute-0 sudo[237128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:50.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:50.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:50 compute-0 python3.9[237130]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:50 compute-0 sudo[237128]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:50 compute-0 sudo[237282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqfhxcsbumoaavqyfroswesnykpuqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031670.6939101-1333-50663414107807/AnsiballZ_systemd_service.py'
Feb 02 11:27:50 compute-0 sudo[237282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:51 compute-0 python3.9[237284]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:51 compute-0 ceph-mon[74676]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:51 compute-0 sudo[237282]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:51 compute-0 sudo[237436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlfcrtbecbcsljjqcpvkpgzztidbrsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031671.497306-1333-229486448730947/AnsiballZ_systemd_service.py'
Feb 02 11:27:51 compute-0 sudo[237436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:52 compute-0 python3.9[237438]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:27:52 compute-0 sudo[237436]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:52.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:52 compute-0 ceph-mon[74676]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:27:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:52.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:52 compute-0 sudo[237590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mppemmxmyerxfvfjhunbwraprotryngs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031672.6029038-1510-154421954196954/AnsiballZ_file.py'
Feb 02 11:27:52 compute-0 sudo[237590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:53 compute-0 python3.9[237592]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:53 compute-0 sudo[237590]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:53 compute-0 podman[237593]: 2026-02-02 11:27:53.311502453 +0000 UTC m=+0.087570273 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:27:53 compute-0 sudo[237768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yikyidppluwwhvxbgoreohbkutvntzpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031673.29479-1510-188816177221482/AnsiballZ_file.py'
Feb 02 11:27:53 compute-0 sudo[237768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04040033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:53 compute-0 python3.9[237771]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:53 compute-0 sudo[237768]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:27:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:54 compute-0 sudo[237921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwpyaxhohdwvmwkgliljcrrktbjynqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031673.9092197-1510-146441733244208/AnsiballZ_file.py'
Feb 02 11:27:54 compute-0 sudo[237921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:54.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:54 compute-0 python3.9[237923]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:54 compute-0 sudo[237921]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:54.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:55 compute-0 sudo[238085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjhialsebmvejtdfaxrhhlbolgnwqbzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031674.5425491-1510-185147751674144/AnsiballZ_file.py'
Feb 02 11:27:55 compute-0 sudo[238085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:55 compute-0 podman[238048]: 2026-02-02 11:27:55.162837845 +0000 UTC m=+0.052855447 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 11:27:55 compute-0 python3.9[238095]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:55 compute-0 sudo[238085]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:55 compute-0 ceph-mon[74676]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:27:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:27:55 compute-0 sudo[238248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkykdjsfasoqwovrqzacwxgiwonfbrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031675.668458-1510-133351072213261/AnsiballZ_file.py'
Feb 02 11:27:55 compute-0 sudo[238248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:56 compute-0 python3.9[238250]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:56 compute-0 sudo[238248]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:27:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:56.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:56 compute-0 sudo[238400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urlrluizymibvtvfkkkqwnxdzgrnlvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031676.2983227-1510-65749608717208/AnsiballZ_file.py'
Feb 02 11:27:56 compute-0 sudo[238400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:56 compute-0 python3.9[238402]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:56 compute-0 sudo[238400]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:56] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Feb 02 11:27:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:27:56] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Feb 02 11:27:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:57.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:27:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:27:57.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:27:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:57 compute-0 sudo[238553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cozjmhyufuferksbinheeffprcovniml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031676.94861-1510-194595319870292/AnsiballZ_file.py'
Feb 02 11:27:57 compute-0 sudo[238553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:57 compute-0 ceph-mon[74676]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:27:57 compute-0 python3.9[238555]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:57 compute-0 sudo[238553]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:57 compute-0 sudo[238706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lefhlefdbyorjatkafchzsgflrxnswzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031677.5489123-1510-35465075092303/AnsiballZ_file.py'
Feb 02 11:27:57 compute-0 sudo[238706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:27:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:58 compute-0 python3.9[238708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:58 compute-0 sudo[238706]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:58 compute-0 ceph-mon[74676]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:27:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:27:58.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:27:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:27:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:27:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:27:58 compute-0 sudo[238858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqdsiuxpbrsrommmklsatjceydloqtsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031678.2304075-1681-211041372781272/AnsiballZ_file.py'
Feb 02 11:27:58 compute-0 sudo[238858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:58 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 02 11:27:58 compute-0 python3.9[238860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:59 compute-0 sudo[238858]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:27:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:27:59 compute-0 sudo[239013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofuidizcipnsipoxsfvvckjsilfvvxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031679.3787801-1681-16816314140488/AnsiballZ_file.py'
Feb 02 11:27:59 compute-0 sudo[239013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:27:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:27:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:27:59 compute-0 python3.9[239015]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:27:59 compute-0 sudo[239013]: pam_unix(sudo:session): session closed for user root
Feb 02 11:27:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:27:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:00 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 02 11:28:00 compute-0 sudo[239166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glopjaooxtowxiwkrwccxxcnifwhyeiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031680.0207982-1681-61676955187448/AnsiballZ_file.py'
Feb 02 11:28:00 compute-0 sudo[239166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:00.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:00.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:00 compute-0 python3.9[239168]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:00 compute-0 sudo[239166]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:00 compute-0 sudo[239319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokohzijqxmnzkppqrskskriakzdqnes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031680.6709359-1681-274530569231026/AnsiballZ_file.py'
Feb 02 11:28:00 compute-0 sudo[239319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:00 compute-0 ceph-mon[74676]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:28:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:01 compute-0 python3.9[239321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:01 compute-0 sudo[239319]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.323319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681323366, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 450, "num_deletes": 250, "total_data_size": 459701, "memory_usage": 469208, "flush_reason": "Manual Compaction"}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681326974, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 341479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17368, "largest_seqno": 17817, "table_properties": {"data_size": 339076, "index_size": 503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6243, "raw_average_key_size": 19, "raw_value_size": 334266, "raw_average_value_size": 1041, "num_data_blocks": 23, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031655, "oldest_key_time": 1770031655, "file_creation_time": 1770031681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3690 microseconds, and 1332 cpu microseconds.
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.327012) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 341479 bytes OK
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.327031) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.328202) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.328221) EVENT_LOG_v1 {"time_micros": 1770031681328216, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.328246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 457033, prev total WAL file size 457033, number of live WAL files 2.
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.328691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(333KB)], [35(14MB)]
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681328842, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 15817064, "oldest_snapshot_seqno": -1}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4877 keys, 11870753 bytes, temperature: kUnknown
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681525958, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11870753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11837832, "index_size": 19627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123168, "raw_average_key_size": 25, "raw_value_size": 11749004, "raw_average_value_size": 2409, "num_data_blocks": 817, "num_entries": 4877, "num_filter_entries": 4877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:28:01 compute-0 sudo[239472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwcrxldwbcldxzrsyqlzqojaympgyphw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031681.325371-1681-144229236506919/AnsiballZ_file.py'
Feb 02 11:28:01 compute-0 sudo[239472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.526273) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11870753 bytes
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.577208) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.2 rd, 60.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.8 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(81.1) write-amplify(34.8) OK, records in: 5378, records dropped: 501 output_compression: NoCompression
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.577260) EVENT_LOG_v1 {"time_micros": 1770031681577239, "job": 16, "event": "compaction_finished", "compaction_time_micros": 197201, "compaction_time_cpu_micros": 22564, "output_level": 6, "num_output_files": 1, "total_output_size": 11870753, "num_input_records": 5378, "num_output_records": 4877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681577548, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031681579937, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.328591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.580001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.580008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.580011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.580013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:28:01.580015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:28:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:01 compute-0 python3.9[239474]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:01 compute-0 sudo[239472]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:28:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:02 compute-0 sudo[239624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nforrkxwdtnapfrkepurgkvbebdiojns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031681.9494-1681-92033426731508/AnsiballZ_file.py'
Feb 02 11:28:02 compute-0 sudo[239624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:02.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:02 compute-0 python3.9[239626]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:02 compute-0 sudo[239624]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:02 compute-0 ceph-mon[74676]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:28:02 compute-0 sudo[239777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtfuzragbfshzbojfxsljtumqedquxvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031682.59093-1681-194531681977525/AnsiballZ_file.py'
Feb 02 11:28:02 compute-0 sudo[239777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:03 compute-0 python3.9[239779]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:03 compute-0 sudo[239777]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:03 compute-0 sudo[239929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpuxwkmjhlquvyitpbdnnwzhbumqbkyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031683.2248642-1681-206013351333669/AnsiballZ_file.py'
Feb 02 11:28:03 compute-0 sudo[239929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:03 compute-0 sudo[239933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:28:03 compute-0 sudo[239933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:03 compute-0 sudo[239933]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:03 compute-0 python3.9[239931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:03 compute-0 sudo[239929]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:04 compute-0 sudo[240107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmmopdwuihcinhtvlllrtywwenqpbco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031684.156961-1855-253764164000615/AnsiballZ_command.py'
Feb 02 11:28:04 compute-0 sudo[240107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:04.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:04 compute-0 python3.9[240109]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:04 compute-0 sudo[240107]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:04 compute-0 ceph-mon[74676]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:05 compute-0 python3.9[240262]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 11:28:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:06 compute-0 sudo[240413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqsrlhsygkbijwdlixjlifqywnyesfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031685.8587966-1909-201268176299921/AnsiballZ_systemd_service.py'
Feb 02 11:28:06 compute-0 sudo[240413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:06 compute-0 python3.9[240415]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:28:06 compute-0 systemd[1]: Reloading.
Feb 02 11:28:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:06 compute-0 systemd-rc-local-generator[240438]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:28:06 compute-0 systemd-sysv-generator[240443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:28:06 compute-0 sudo[240413]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:06] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:28:07 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:06] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Feb 02 11:28:07 compute-0 ceph-mon[74676]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:28:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:07.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:07 compute-0 sudo[240601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvryoskblkwexwwcxxnzlnosieynjwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031686.9956748-1933-182373355734563/AnsiballZ_command.py'
Feb 02 11:28:07 compute-0 sudo[240601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:07 compute-0 python3.9[240603]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:07 compute-0 sudo[240601]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:07 compute-0 sudo[240755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upnbzivjhexivevhfzypncyzfsqoaiml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031687.6273513-1933-219149419270195/AnsiballZ_command.py'
Feb 02 11:28:07 compute-0 sudo[240755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:08 compute-0 python3.9[240757]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:08 compute-0 sudo[240755]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:08 compute-0 sudo[240908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogdeyydiosbziazqlcwmoplkmxxwmlcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031688.5147245-1933-138754405396343/AnsiballZ_command.py'
Feb 02 11:28:08 compute-0 sudo[240908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:08 compute-0 python3.9[240910]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:09 compute-0 sudo[240908]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003c40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:09 compute-0 ceph-mon[74676]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:09 compute-0 sudo[241062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-respmvudtirkfdfsjuxkumbhsnfrrgkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031689.1551824-1933-191808678985222/AnsiballZ_command.py'
Feb 02 11:28:09 compute-0 sudo[241062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:09 compute-0 python3.9[241064]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:09 compute-0 sudo[241062]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:09 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 02 11:28:09 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 02 11:28:09 compute-0 sudo[241218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duoxjnmwhfnevrsfhgkesdbzvuqqfpoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031689.7534277-1933-101261976451857/AnsiballZ_command.py'
Feb 02 11:28:09 compute-0 sudo[241218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:10 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:10 compute-0 python3.9[241220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:10 compute-0 sudo[241218]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:10.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:10 compute-0 sudo[241373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhhqybalosvugdmcsnhccbldqrphisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031690.3181627-1933-153477505239248/AnsiballZ_command.py'
Feb 02 11:28:10 compute-0 sudo[241373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:10 compute-0 python3.9[241375]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:10 compute-0 sudo[241373]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:11 compute-0 sudo[241527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxoronsqmxoydgaojedqitdhmtzafgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031690.8697424-1933-17710947422137/AnsiballZ_command.py'
Feb 02 11:28:11 compute-0 sudo[241527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:11 compute-0 ceph-mon[74676]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:11 compute-0 python3.9[241529]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:11 compute-0 sudo[241527]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:11 compute-0 sshd-session[241221]: Invalid user mapr from 80.94.92.186 port 53028
Feb 02 11:28:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003c40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:11 compute-0 sudo[241681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eursbockknqsodmlrlyusrvyemtltixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031691.457237-1933-173302974964282/AnsiballZ_command.py'
Feb 02 11:28:11 compute-0 sudo[241681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:11 compute-0 sshd-session[241221]: Connection closed by invalid user mapr 80.94.92.186 port 53028 [preauth]
Feb 02 11:28:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:11 compute-0 python3.9[241683]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 11:28:11 compute-0 sudo[241681]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:12 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:28:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:28:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:12 compute-0 sudo[241709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:28:12 compute-0 sudo[241709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:12 compute-0 sudo[241709]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:12 compute-0 sudo[241735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:28:12 compute-0 sudo[241735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:13 compute-0 ceph-mon[74676]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:13 compute-0 sudo[241936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evibjjvzasozibesfimzticlfanxutcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031693.025038-2140-38442622945813/AnsiballZ_file.py'
Feb 02 11:28:13 compute-0 sudo[241936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:13 compute-0 podman[241958]: 2026-02-02 11:28:13.416271097 +0000 UTC m=+0.064069019 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:28:13 compute-0 python3.9[241943]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:13 compute-0 sudo[241936]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:13 compute-0 podman[241958]: 2026-02-02 11:28:13.542194288 +0000 UTC m=+0.189992210 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:28:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:13 compute-0 sudo[242210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzikcvloirikvqxvpjtodmxrmdikkbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031693.6435874-2140-98287680140116/AnsiballZ_file.py'
Feb 02 11:28:13 compute-0 sudo[242210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:14 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:14 compute-0 podman[242242]: 2026-02-02 11:28:14.066068725 +0000 UTC m=+0.056165192 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:14 compute-0 python3.9[242216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:14 compute-0 podman[242266]: 2026-02-02 11:28:14.142972031 +0000 UTC m=+0.058199341 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:14 compute-0 sudo[242210]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:14 compute-0 podman[242242]: 2026-02-02 11:28:14.149868238 +0000 UTC m=+0.139964675 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:14 compute-0 podman[242390]: 2026-02-02 11:28:14.383937251 +0000 UTC m=+0.053234028 container exec 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:28:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:14.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:14 compute-0 podman[242390]: 2026-02-02 11:28:14.421400036 +0000 UTC m=+0.090696803 container exec_died 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:28:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:14 compute-0 sudo[242508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvofguctlunfxtbpaevyebtaayzivpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031694.267755-2140-145165010082481/AnsiballZ_file.py'
Feb 02 11:28:14 compute-0 sudo[242508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:28:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:14 compute-0 podman[242532]: 2026-02-02 11:28:14.627844907 +0000 UTC m=+0.055840892 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:28:14 compute-0 podman[242532]: 2026-02-02 11:28:14.637139604 +0000 UTC m=+0.065135599 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:28:14 compute-0 python3.9[242515]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:14 compute-0 sudo[242508]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:14 compute-0 podman[242599]: 2026-02-02 11:28:14.843242246 +0000 UTC m=+0.053837016 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Feb 02 11:28:14 compute-0 podman[242638]: 2026-02-02 11:28:14.912977806 +0000 UTC m=+0.051412786 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public)
Feb 02 11:28:14 compute-0 podman[242599]: 2026-02-02 11:28:14.918667219 +0000 UTC m=+0.129261969 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-type=git, release=1793)
Feb 02 11:28:15 compute-0 podman[242763]: 2026-02-02 11:28:15.104952252 +0000 UTC m=+0.049554792 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:15 compute-0 sudo[242833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbyoyckmskgwlibpwqqeowwcpcpcqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031694.895967-2206-111646197089597/AnsiballZ_file.py'
Feb 02 11:28:15 compute-0 sudo[242833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:15 compute-0 podman[242763]: 2026-02-02 11:28:15.136281401 +0000 UTC m=+0.080883891 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:15 compute-0 ceph-mon[74676]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:15 compute-0 python3.9[242835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:15 compute-0 podman[242880]: 2026-02-02 11:28:15.337875353 +0000 UTC m=+0.054484444 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:28:15 compute-0 sudo[242833]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:15 compute-0 podman[242880]: 2026-02-02 11:28:15.517248018 +0000 UTC m=+0.233857109 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:28:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:15 compute-0 sudo[243106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxifryrdgbthnileeudmscetzzsglvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031695.4633458-2206-63557234182775/AnsiballZ_file.py'
Feb 02 11:28:15 compute-0 sudo[243106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:15 compute-0 podman[243141]: 2026-02-02 11:28:15.866186227 +0000 UTC m=+0.054357190 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:15 compute-0 python3.9[243110]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:15 compute-0 podman[243141]: 2026-02-02 11:28:15.90429615 +0000 UTC m=+0.092467113 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:28:15 compute-0 sudo[243106]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:15 compute-0 sudo[241735]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:28:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:16 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:16 compute-0 sudo[243219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:28:16 compute-0 sudo[243219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:16 compute-0 sudo[243219]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:16 compute-0 sudo[243267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:28:16 compute-0 sudo[243267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:16 compute-0 sudo[243382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyapepoamhcmwxnnlzdllrohdicfcqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031696.0540018-2206-137956652385156/AnsiballZ_file.py'
Feb 02 11:28:16 compute-0 sudo[243382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:16 compute-0 python3.9[243384]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:16 compute-0 sudo[243382]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:16 compute-0 sudo[243267]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:28:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:28:16 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:28:16 compute-0 sudo[243441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:28:16 compute-0 sudo[243441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:16 compute-0 sudo[243441]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:16 compute-0 sudo[243489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:28:16 compute-0 sudo[243489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:16 compute-0 sudo[243617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeskluqtarkttxhxcqnrbkznordjuxad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031696.6404552-2206-16377017045876/AnsiballZ_file.py'
Feb 02 11:28:16 compute-0 sudo[243617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:28:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:16] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:28:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:17.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:17 compute-0 python3.9[243619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.059574417 +0000 UTC m=+0.021511368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:17 compute-0 sudo[243617]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.374884331 +0000 UTC m=+0.336821252 container create 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:28:17 compute-0 sudo[243824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yylqwecucblvrdtnmupbzseikwtzmppb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031697.3041587-2206-92205595304074/AnsiballZ_file.py'
Feb 02 11:28:17 compute-0 sudo[243824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:17 compute-0 ceph-mon[74676]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:28:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:28:17 compute-0 systemd[1]: Started libpod-conmon-97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f.scope.
Feb 02 11:28:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.621874406 +0000 UTC m=+0.583811357 container init 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.628062183 +0000 UTC m=+0.589999104 container start 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.631410869 +0000 UTC m=+0.593347980 container attach 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:28:17 compute-0 objective_payne[243830]: 167 167
Feb 02 11:28:17 compute-0 systemd[1]: libpod-97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f.scope: Deactivated successfully.
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.635872757 +0000 UTC m=+0.597809688 container died 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d030b50c5c477f44f9a93f70a825610c0288ae551b5dea5dbeeb8d16115f538b-merged.mount: Deactivated successfully.
Feb 02 11:28:17 compute-0 podman[243661]: 2026-02-02 11:28:17.674532816 +0000 UTC m=+0.636469747 container remove 97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:28:17 compute-0 systemd[1]: libpod-conmon-97cd25c6a19e6ab36eff67ac00e23117b0ca32d7984853de01101a2916cd2a5f.scope: Deactivated successfully.
Feb 02 11:28:17 compute-0 python3.9[243826]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:17 compute-0 sudo[243824]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:17 compute-0 podman[243855]: 2026-02-02 11:28:17.822692615 +0000 UTC m=+0.044415494 container create 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:28:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:17 compute-0 systemd[1]: Started libpod-conmon-6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa.scope.
Feb 02 11:28:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:17 compute-0 podman[243855]: 2026-02-02 11:28:17.80473235 +0000 UTC m=+0.026455249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:17 compute-0 podman[243855]: 2026-02-02 11:28:17.907966911 +0000 UTC m=+0.129689800 container init 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:28:17 compute-0 podman[243855]: 2026-02-02 11:28:17.915016683 +0000 UTC m=+0.136739552 container start 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:28:17 compute-0 podman[243855]: 2026-02-02 11:28:17.918546534 +0000 UTC m=+0.140269403 container attach 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:28:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:18 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:18 compute-0 sudo[244030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohtczfwghewqkywuxvfmdbheemmqpko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031697.8969805-2206-263860868824135/AnsiballZ_file.py'
Feb 02 11:28:18 compute-0 sudo[244030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:18 compute-0 objective_kirch[243895]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:28:18 compute-0 objective_kirch[243895]: --> All data devices are unavailable
Feb 02 11:28:18 compute-0 systemd[1]: libpod-6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa.scope: Deactivated successfully.
Feb 02 11:28:18 compute-0 podman[243855]: 2026-02-02 11:28:18.271008664 +0000 UTC m=+0.492731543 container died 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b0e41abf1bf758f682115d1c25aea90f15a8fc290daeeed1ae6274a8e1b6a2-merged.mount: Deactivated successfully.
Feb 02 11:28:18 compute-0 podman[243855]: 2026-02-02 11:28:18.318054623 +0000 UTC m=+0.539777502 container remove 6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:28:18 compute-0 systemd[1]: libpod-conmon-6274d79eb929d732d63875d882591df6471fad13d1ec3c2544ddea8f384abeaa.scope: Deactivated successfully.
Feb 02 11:28:18 compute-0 sudo[243489]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:18 compute-0 python3.9[244032]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:18.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:18 compute-0 sudo[244052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:28:18 compute-0 sudo[244052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:18 compute-0 sudo[244052]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:18 compute-0 sudo[244030]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:18 compute-0 sudo[244077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:28:18 compute-0 sudo[244077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:18 compute-0 ceph-mon[74676]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:18 compute-0 sudo[244282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdxoqtilwkqtreasibwagaiekimchqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031698.5550845-2206-240415926709121/AnsiballZ_file.py'
Feb 02 11:28:18 compute-0 sudo[244282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.873728922 +0000 UTC m=+0.037148767 container create 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:28:18 compute-0 systemd[1]: Started libpod-conmon-56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e.scope.
Feb 02 11:28:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.857502996 +0000 UTC m=+0.020922871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.958422311 +0000 UTC m=+0.121842156 container init 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.967382228 +0000 UTC m=+0.130802073 container start 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.97094291 +0000 UTC m=+0.134362765 container attach 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:28:18 compute-0 hungry_shockley[244311]: 167 167
Feb 02 11:28:18 compute-0 systemd[1]: libpod-56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e.scope: Deactivated successfully.
Feb 02 11:28:18 compute-0 conmon[244311]: conmon 56b7261fbf8e3455bbcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e.scope/container/memory.events
Feb 02 11:28:18 compute-0 podman[244293]: 2026-02-02 11:28:18.975321746 +0000 UTC m=+0.138741621 container died 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83ae7f70e1712882ff6c6051c0f83bff66cf760a0e402d0b08e26d8fc5b1a0c-merged.mount: Deactivated successfully.
Feb 02 11:28:19 compute-0 python3.9[244289]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:19 compute-0 podman[244293]: 2026-02-02 11:28:19.018941117 +0000 UTC m=+0.182360962 container remove 56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:28:19 compute-0 sudo[244282]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:19 compute-0 systemd[1]: libpod-conmon-56b7261fbf8e3455bbcb1f128fb33c9d8c0014b9c1cb6d6fa2868491b85b618e.scope: Deactivated successfully.
Feb 02 11:28:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.164392579 +0000 UTC m=+0.045348112 container create de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:28:19 compute-0 systemd[1]: Started libpod-conmon-de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01.scope.
Feb 02 11:28:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.146098754 +0000 UTC m=+0.027054317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065724dc601773d35c7ad0022d903428beb119658d8d8bbf3e922d5f4421805d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065724dc601773d35c7ad0022d903428beb119658d8d8bbf3e922d5f4421805d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065724dc601773d35c7ad0022d903428beb119658d8d8bbf3e922d5f4421805d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065724dc601773d35c7ad0022d903428beb119658d8d8bbf3e922d5f4421805d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.262851963 +0000 UTC m=+0.143807506 container init de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.269678659 +0000 UTC m=+0.150634192 container start de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.274013273 +0000 UTC m=+0.154968826 container attach de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:28:19 compute-0 admiring_lamport[244376]: {
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:     "1": [
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:         {
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "devices": [
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "/dev/loop3"
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             ],
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "lv_name": "ceph_lv0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "lv_size": "21470642176",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "name": "ceph_lv0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "tags": {
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.cluster_name": "ceph",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.crush_device_class": "",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.encrypted": "0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.osd_id": "1",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.type": "block",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.vdo": "0",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:                 "ceph.with_tpm": "0"
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             },
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "type": "block",
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:             "vg_name": "ceph_vg0"
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:         }
Feb 02 11:28:19 compute-0 admiring_lamport[244376]:     ]
Feb 02 11:28:19 compute-0 admiring_lamport[244376]: }
Feb 02 11:28:19 compute-0 systemd[1]: libpod-de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01.scope: Deactivated successfully.
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.576395886 +0000 UTC m=+0.457351419 container died de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-065724dc601773d35c7ad0022d903428beb119658d8d8bbf3e922d5f4421805d-merged.mount: Deactivated successfully.
Feb 02 11:28:19 compute-0 podman[244359]: 2026-02-02 11:28:19.615933551 +0000 UTC m=+0.496889084 container remove de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:28:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:19 compute-0 systemd[1]: libpod-conmon-de3220f092533d3209678a80b4575509e801de0b0252378690a97543e81b0a01.scope: Deactivated successfully.
Feb 02 11:28:19 compute-0 sudo[244077]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:19 compute-0 sudo[244398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:28:19 compute-0 sudo[244398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:19 compute-0 sudo[244398]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:19 compute-0 sudo[244423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:28:19 compute-0 sudo[244423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
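
The sudo COMMAND above shows how the cephadm orchestrator inventories devices: it runs a staged copy of the cephadm binary, which wraps `ceph-volume ... raw list --format json` in a one-shot container. A minimal sketch of the same query; the FSID, image digest, timeout, and argument order are copied from the logged command, while having a `cephadm` CLI on PATH is an assumption (the orchestrator invokes the staged file under /var/lib/ceph instead):

    import json
    import subprocess

    # Re-run the device inventory query from the log. Values are copied from
    # the sudo COMMAND above; `cephadm` on PATH is an assumption.
    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))  # this run printed {}: no raw-mode (non-LVM) OSDs
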
Feb 02 11:28:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:20 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.169468748 +0000 UTC m=+0.039430112 container create 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:28:20 compute-0 systemd[1]: Started libpod-conmon-2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1.scope.
Feb 02 11:28:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.248501525 +0000 UTC m=+0.118462909 container init 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.154032245 +0000 UTC m=+0.023993639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.254134286 +0000 UTC m=+0.124095650 container start 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.257984717 +0000 UTC m=+0.127946101 container attach 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:28:20 compute-0 relaxed_hoover[244507]: 167 167
Feb 02 11:28:20 compute-0 systemd[1]: libpod-2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1.scope: Deactivated successfully.
Feb 02 11:28:20 compute-0 conmon[244507]: conmon 2b28ead4cdb1035f50bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1.scope/container/memory.events
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.260983183 +0000 UTC m=+0.130944547 container died 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2b92ecc38e1029d05c9e28bee62e61835697a6796d9397398419ecb7b9f713-merged.mount: Deactivated successfully.
Feb 02 11:28:20 compute-0 podman[244490]: 2026-02-02 11:28:20.297713016 +0000 UTC m=+0.167674380 container remove 2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:28:20 compute-0 systemd[1]: libpod-conmon-2b28ead4cdb1035f50bf541d4fe07f9cc202143d339aa0fafbc8670c0a9486e1.scope: Deactivated successfully.
Feb 02 11:28:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
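
These anonymous "HEAD / HTTP/1.0" requests recur roughly every two seconds from 192.168.122.100 and 192.168.122.102 throughout the section, the signature of load-balancer health probes against the RGW beast frontend (an inference; the probing service is not named in the log). A minimal sketch of one such probe, assuming the frontend listens on port 8080; the port does not appear in these lines:

    import http.client

    # Issue the same anonymous HEAD / probe the log records every ~2s.
    # Assumption: beast listens on 192.168.122.100:8080 (port not logged).
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200, matching http_status=200 above
    conn.close()
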
Feb 02 11:28:20 compute-0 podman[244530]: 2026-02-02 11:28:20.439192804 +0000 UTC m=+0.044798736 container create c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:28:20 compute-0 systemd[1]: Started libpod-conmon-c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb.scope.
Feb 02 11:28:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4a856f691dcec52c1c3c9fe2ce0e9a25332d407707f0a03bb284afce21784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:20 compute-0 podman[244530]: 2026-02-02 11:28:20.418383127 +0000 UTC m=+0.023989069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4a856f691dcec52c1c3c9fe2ce0e9a25332d407707f0a03bb284afce21784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4a856f691dcec52c1c3c9fe2ce0e9a25332d407707f0a03bb284afce21784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4a856f691dcec52c1c3c9fe2ce0e9a25332d407707f0a03bb284afce21784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:28:20 compute-0 podman[244530]: 2026-02-02 11:28:20.526811367 +0000 UTC m=+0.132417319 container init c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:28:20 compute-0 podman[244530]: 2026-02-02 11:28:20.533823649 +0000 UTC m=+0.139429571 container start c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:28:20 compute-0 podman[244530]: 2026-02-02 11:28:20.537474333 +0000 UTC m=+0.143080265 container attach c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:28:20 compute-0 ceph-mon[74676]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:21 compute-0 lvm[244621]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:28:21 compute-0 lvm[244621]: VG ceph_vg0 finished
Feb 02 11:28:21 compute-0 magical_mirzakhani[244546]: {}
Feb 02 11:28:21 compute-0 systemd[1]: libpod-c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb.scope: Deactivated successfully.
Feb 02 11:28:21 compute-0 systemd[1]: libpod-c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb.scope: Consumed 1.077s CPU time.
Feb 02 11:28:21 compute-0 podman[244626]: 2026-02-02 11:28:21.301588221 +0000 UTC m=+0.026696457 container died c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f4a856f691dcec52c1c3c9fe2ce0e9a25332d407707f0a03bb284afce21784-merged.mount: Deactivated successfully.
Feb 02 11:28:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:21 compute-0 podman[244626]: 2026-02-02 11:28:21.341716992 +0000 UTC m=+0.066825218 container remove c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:28:21 compute-0 systemd[1]: libpod-conmon-c85c0d0d23e0a2247c8cb865fcc79b0c0cd03fa159c0cc6adb711f2526f047cb.scope: Deactivated successfully.
Feb 02 11:28:21 compute-0 sudo[244423]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:28:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:28:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:21 compute-0 sudo[244639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:28:21 compute-0 sudo[244639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:21 compute-0 sudo[244639]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:22 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:22.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:22 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:28:22 compute-0 ceph-mon[74676]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:28:22.661 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:28:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:28:22.662 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:28:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:28:22.662 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
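
The acquire/acquired/released trio above is oslo.concurrency's `synchronized` decorator wrapping ProcessMonitor._check_child_processes (the "inner ... lockutils.py" suffix is the decorator's wrapper function). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is elided, as the log only shows the lock held for ~0.000s:

    from oslo_concurrency import lockutils

    # Same lock name as in the log; entry and exit produce the
    # acquire/acquired/released DEBUG trio seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body elided; illustrative only

    check_child_processes()
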
Feb 02 11:28:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:23 compute-0 sudo[244667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:28:23 compute-0 sudo[244667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:23 compute-0 sudo[244667]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:23 compute-0 podman[244691]: 2026-02-02 11:28:23.870570635 +0000 UTC m=+0.083191937 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
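
The health_status=healthy event above is podman's timer-driven execution of the healthcheck configured in config_data ('test': '/openstack/healthcheck'). A minimal sketch of triggering the same check by hand with `podman healthcheck run`, which exits 0 when the container is healthy:

    import subprocess

    # Run the container's configured healthcheck on demand; the periodic
    # timer-driven runs produce the health_status journal events above.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")
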
Feb 02 11:28:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:24 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:24 compute-0 sudo[244844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zveivwbogezlfhqwrmniidjfdcpyhxcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031704.1547394-2531-213076383531323/AnsiballZ_getent.py'
Feb 02 11:28:24 compute-0 sudo[244844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:24 compute-0 python3.9[244846]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 02 11:28:24 compute-0 sudo[244844]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:24 compute-0 ceph-mon[74676]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:25 compute-0 podman[244939]: 2026-02-02 11:28:25.257068333 +0000 UTC m=+0.048231855 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 11:28:25 compute-0 sudo[245018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwschuzvlcvlzhoonaktnsxvmruqmso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031704.9414136-2555-223631533409024/AnsiballZ_group.py'
Feb 02 11:28:25 compute-0 sudo[245018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:25 compute-0 python3.9[245020]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 11:28:25 compute-0 groupadd[245022]: group added to /etc/group: name=nova, GID=42436
Feb 02 11:28:25 compute-0 groupadd[245022]: group added to /etc/gshadow: name=nova
Feb 02 11:28:25 compute-0 groupadd[245022]: new group: name=nova, GID=42436
Feb 02 11:28:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:25 compute-0 sudo[245018]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:26 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:26 compute-0 sudo[245177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvzjjdhobqnonxlnluvqbkuplzbzjvrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031705.7874973-2579-166811654886335/AnsiballZ_user.py'
Feb 02 11:28:26 compute-0 sudo[245177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:26 compute-0 python3.9[245179]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 11:28:26 compute-0 useradd[245181]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Feb 02 11:28:26 compute-0 useradd[245181]: add 'nova' to group 'libvirt'
Feb 02 11:28:26 compute-0 useradd[245181]: add 'nova' to shadow group 'libvirt'
Feb 02 11:28:26 compute-0 sudo[245177]: pam_unix(sudo:session): session closed for user root
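
The getent/group/user sequence above is the standard Ansible pattern for creating a service account: check for an existing entry, create the group, then the user. A minimal sketch of the shadow-utils calls the two modules resolve to, with the values copied from the groupadd/useradd lines; run as root, illustrative only:

    import subprocess

    # Mirror the logged groupadd/useradd invocations for the nova account.
    subprocess.run(["groupadd", "--gid", "42436", "nova"], check=True)
    subprocess.run(
        ["useradd",
         "--uid", "42436", "--gid", "42436",  # UID/GID from the log
         "--groups", "libvirt",               # supplementary group
         "--shell", "/bin/sh",
         "--comment", "nova user",
         "--create-home",
         "nova"],
        check=True,
    )
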
Feb 02 11:28:26 compute-0 ceph-mon[74676]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:28:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:26] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:28:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:27.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:28 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428009ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:28 compute-0 sshd-session[245215]: Accepted publickey for zuul from 192.168.122.30 port 47046 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:28:28 compute-0 systemd-logind[793]: New session 55 of user zuul.
Feb 02 11:28:28 compute-0 systemd[1]: Started Session 55 of User zuul.
Feb 02 11:28:28 compute-0 sshd-session[245215]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:28:28 compute-0 sshd-session[245218]: Received disconnect from 192.168.122.30 port 47046:11: disconnected by user
Feb 02 11:28:28 compute-0 sshd-session[245218]: Disconnected from user zuul 192.168.122.30 port 47046
Feb 02 11:28:28 compute-0 sshd-session[245215]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:28:28 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Feb 02 11:28:28 compute-0 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Feb 02 11:28:28 compute-0 systemd-logind[793]: Removed session 55.
Feb 02 11:28:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:28.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:28 compute-0 python3.9[245368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:29 compute-0 ceph-mon[74676]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:29 compute-0 python3.9[245490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031708.445819-2654-109987515522701/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
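
Each config file in the rest of this section lands via the same two-step idempotency pattern: `ansible.legacy.stat` fetches the destination's SHA-1, and `ansible.legacy.copy` transfers the staged content only when the checksum differs. A minimal sketch of that decision using the paths from the two lines above; the comparison logic is an illustrative reconstruction, not Ansible's own code:

    import hashlib
    import pathlib
    import shutil

    # Copy staged content only when the destination's SHA-1 differs,
    # mirroring the stat-then-copy pair in the log. Paths from the log.
    src = pathlib.Path("/home/zuul/.ansible/tmp/"
                       "ansible-tmp-1770031708.445819-2654-109987515522701/"
                       ".source.json")
    dst = pathlib.Path("/var/lib/openstack/config/nova/config.json")

    def sha1(path: pathlib.Path) -> str:
        return hashlib.sha1(path.read_bytes()).hexdigest()

    if not dst.exists() or sha1(dst) != sha1(src):
        shutil.copy2(src, dst)
        dst.chmod(0o644)  # mode=0644 as in the copy task
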
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:28:29
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'volumes']
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:28:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:28:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:28:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
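
The pg_autoscaler lines above all follow one formula: a pool's "pg target" is its share of raw space times its bias times a cluster-wide PG budget. Back-solving from the '.mgr' line gives a budget of 300, consistent with the default mon_target_pg_per_osd = 100 on a three-OSD cluster (the OSD count is an inference; it is not shown in these lines). A worked check against three of the logged pools:

    # pg target = usage_ratio * bias * PG_BUDGET, then quantized to a power
    # of two (subject to per-pool minimums and change hysteresis).
    PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * assumed 3 OSDs

    pools = {  # name: (usage_ratio, bias), copied from the log lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # .mgr:               0.0021557249951162337 -> quantized to 1
    # cephfs.cephfs.meta: 0.0006104707950771635 -> quantized to 16
    # default.rgw.log:    0.0006486252197694863 -> quantized to 32
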
Feb 02 11:28:29 compute-0 python3.9[245641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:30 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:30 compute-0 python3.9[245717]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:30.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:28:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:28:30 compute-0 python3.9[245867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428009ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:31 compute-0 ceph-mon[74676]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:31 compute-0 python3.9[245989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031710.5314786-2654-112121669599217/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:32 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:32 compute-0 python3.9[246140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 335 B/s rd, 0 op/s
Feb 02 11:28:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:32.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:32.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:32 compute-0 python3.9[246261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031711.5939388-2654-91658381027538/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:33 compute-0 python3.9[246412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:33 compute-0 ceph-mon[74676]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 335 B/s rd, 0 op/s
Feb 02 11:28:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:33 compute-0 python3.9[246533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031712.6544573-2654-147318151584958/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:34 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:34 compute-0 python3.9[246684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:28:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:34.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:28:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:34 compute-0 ceph-mon[74676]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:34 compute-0 python3.9[246805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031713.8013916-2654-19089326816053/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:35 compute-0 sudo[246956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynnawlepumzqushjwwrrjtumupegafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031714.983286-2903-157226246821094/AnsiballZ_file.py'
Feb 02 11:28:35 compute-0 sudo[246956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:35 compute-0 python3.9[246958]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:35 compute-0 sudo[246956]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:35 compute-0 sudo[247109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgipoxzvmgzfjdrkqayxqedqpivlfsvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031715.6127439-2927-156601602305854/AnsiballZ_copy.py'
Feb 02 11:28:35 compute-0 sudo[247109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:36 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s
Feb 02 11:28:36 compute-0 python3.9[247111]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:28:36 compute-0 sudo[247109]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:36.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:36 compute-0 sudo[247261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvllsecdkdkexwxuprsopbptlnbcarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031716.2479334-2951-171911284316681/AnsiballZ_stat.py'
Feb 02 11:28:36 compute-0 sudo[247261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:36 compute-0 python3.9[247263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:28:36 compute-0 sudo[247261]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:28:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:37.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:37 compute-0 sudo[247414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmqtihqdyscifreoesilyqbdheudkth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031716.9300637-2975-219659279432051/AnsiballZ_stat.py'
Feb 02 11:28:37 compute-0 sudo[247414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:37 compute-0 ceph-mon[74676]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s
Feb 02 11:28:37 compute-0 python3.9[247416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:37 compute-0 sudo[247414]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:37 compute-0 sudo[247538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxzgafxvskojiryctpyuimgjikkzduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031716.9300637-2975-219659279432051/AnsiballZ_copy.py'
Feb 02 11:28:37 compute-0 sudo[247538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:37 compute-0 python3.9[247540]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1770031716.9300637-2975-219659279432051/.source _original_basename=.1o26m2yc follow=False checksum=745405be814501fd24c95b1e81c38976df32eeb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 02 11:28:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:38 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:38 compute-0 sudo[247538]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:38.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:38 compute-0 python3.9[247692]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:28:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:39 compute-0 ceph-mon[74676]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Feb 02 11:28:39 compute-0 python3.9[247845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:39 compute-0 python3.9[247967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031718.9765837-3053-23850387139205/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:40 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:40.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:40 compute-0 ceph-mon[74676]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:40.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:40 compute-0 python3.9[248117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 11:28:41 compute-0 python3.9[248238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031720.113207-3098-73844941298769/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 11:28:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:41 compute-0 sudo[248390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jirouwhwlfdepjptzrqlwlomwusykwiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031721.4646015-3149-63428204050751/AnsiballZ_container_config_data.py'
Feb 02 11:28:41 compute-0 sudo[248390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:42 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:42 compute-0 python3.9[248392]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb 02 11:28:42 compute-0 sudo[248390]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:42.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:42 compute-0 sudo[248543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtbizlxqmhnnspafblmlyiwyomzwrln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031722.5310323-3182-162694938266660/AnsiballZ_container_config_hash.py'
Feb 02 11:28:42 compute-0 sudo[248543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:43 compute-0 python3.9[248545]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 11:28:43 compute-0 ceph-mon[74676]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:43 compute-0 sudo[248543]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:43 compute-0 sudo[248623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:28:43 compute-0 sudo[248623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:28:43 compute-0 sudo[248623]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:44 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:44 compute-0 sudo[248721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifoamxsodwdkiarmvmlijtjsfhkologu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031723.5568492-3212-31162439820801/AnsiballZ_edpm_container_manage.py'
Feb 02 11:28:44 compute-0 sudo[248721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:28:44 compute-0 python3[248723]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 11:28:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:28:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:45 compute-0 ceph-mon[74676]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:46 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:46.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:47.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:47 compute-0 ceph-mon[74676]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:48 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:48 compute-0 ceph-mon[74676]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:50 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:52 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a7d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:28:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:52.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c0045f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:54 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:54.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:54 compute-0 ceph-mon[74676]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a7d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c0045f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:56 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:28:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:28:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:28:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:28:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:28:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:56.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:28:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:56] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:28:56] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:28:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:28:57.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:28:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:57 compute-0 podman[248816]: 2026-02-02 11:28:57.246571009 +0000 UTC m=+1.038335178 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 02 11:28:57 compute-0 podman[248804]: 2026-02-02 11:28:57.274366699 +0000 UTC m=+3.065064227 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:28:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800a7d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:58 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:28:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:28:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:28:58.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:28:58 compute-0 podman[248737]: 2026-02-02 11:28:58.47462149 +0000 UTC m=+14.046036744 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 11:28:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:28:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:28:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:28:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:28:58 compute-0 podman[248878]: 2026-02-02 11:28:58.579762057 +0000 UTC m=+0.022469676 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 11:28:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:59 compute-0 podman[248878]: 2026-02-02 11:28:59.364472276 +0000 UTC m=+0.807179865 container create 24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2)
Feb 02 11:28:59 compute-0 python3[248723]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 02 11:28:59 compute-0 sudo[248721]: pam_unix(sudo:session): session closed for user root
Feb 02 11:28:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:28:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:28:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:28:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:28:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:00 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:00.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:01 compute-0 anacron[30961]: Job `cron.weekly' started
Feb 02 11:29:01 compute-0 anacron[30961]: Job `cron.weekly' terminated
Feb 02 11:29:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:02 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:02 compute-0 ceph-mon[74676]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:02 compute-0 ceph-mon[74676]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:03 compute-0 sudo[248951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:29:03 compute-0 sudo[248951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:03 compute-0 sudo[248951]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:04 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:04.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:04.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:04 compute-0 ceph-mon[74676]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:29:04 compute-0 ceph-mon[74676]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:04 compute-0 ceph-mon[74676]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:04 compute-0 ceph-mon[74676]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112905 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:29:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:06 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:06 compute-0 ceph-mon[74676]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:06 compute-0 sudo[249103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gamdmvmmwtofomrrioysvprbyfcyswdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031746.1385014-3236-117974961674109/AnsiballZ_stat.py'
Feb 02 11:29:06 compute-0 sudo[249103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:29:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:29:06 compute-0 python3.9[249105]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:29:06 compute-0 sudo[249103]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:29:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:29:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:07.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:29:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:07 compute-0 ceph-mon[74676]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:07 compute-0 sudo[249259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfeidfskwsezvxxlqkuaokbxvyuwvjle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031747.302744-3272-64237463757288/AnsiballZ_container_config_data.py'
Feb 02 11:29:07 compute-0 sudo[249259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:07 compute-0 python3.9[249261]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb 02 11:29:07 compute-0 sudo[249259]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:08 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:08.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:08 compute-0 sudo[249411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aulmimnmirxdtkldcsawynqudiqfpxwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031748.2827375-3305-263614700136103/AnsiballZ_container_config_hash.py'
Feb 02 11:29:08 compute-0 sudo[249411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:08.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:08 compute-0 python3.9[249413]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 11:29:08 compute-0 sudo[249411]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:09 compute-0 ceph-mon[74676]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:09 compute-0 sudo[249564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxzohqhelpditnqfqdyzxcalhfxknxm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770031749.2064345-3335-180463495824405/AnsiballZ_edpm_container_manage.py'
Feb 02 11:29:09 compute-0 sudo[249564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:09 compute-0 python3[249566]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 11:29:09 compute-0 podman[249607]: 2026-02-02 11:29:09.899437124 +0000 UTC m=+0.021045055 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 11:29:10 compute-0 podman[249607]: 2026-02-02 11:29:10.013500741 +0000 UTC m=+0.135108652 container create afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 02 11:29:10 compute-0 python3[249566]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 02 11:29:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:10 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:10 compute-0 sudo[249564]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:10 compute-0 sudo[249796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjmfrelbpsfdeobayvytoylgmriufix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031750.2671163-3359-6145367532710/AnsiballZ_stat.py'
Feb 02 11:29:10 compute-0 sudo[249796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:10.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:10 compute-0 ceph-mon[74676]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:10 compute-0 python3.9[249798]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:29:10 compute-0 sudo[249796]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:11 compute-0 sudo[249951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrmovbsucviujmxsqwkgaczmjauzvcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031750.9872794-3386-214945781198718/AnsiballZ_file.py'
Feb 02 11:29:11 compute-0 sudo[249951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:11 compute-0 python3.9[249953]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:29:11 compute-0 sudo[249951]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:11 compute-0 sudo[250103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encgxffrinioeysotwmmusdmhsaumdpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031751.5038626-3386-97639540685423/AnsiballZ_copy.py'
Feb 02 11:29:11 compute-0 sudo[250103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:12 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404001d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:12 compute-0 python3.9[250105]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031751.5038626-3386-97639540685423/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 11:29:12 compute-0 sudo[250103]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:12 compute-0 sudo[250179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znnvrpstrnwlriwnrzmwtwyxnjnikrfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031751.5038626-3386-97639540685423/AnsiballZ_systemd.py'
Feb 02 11:29:12 compute-0 sudo[250179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.003000087s ======
Feb 02 11:29:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:12.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Feb 02 11:29:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:12 compute-0 python3.9[250181]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 11:29:12 compute-0 systemd[1]: Reloading.
Feb 02 11:29:12 compute-0 systemd-sysv-generator[250212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:29:12 compute-0 systemd-rc-local-generator[250209]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:29:13 compute-0 sudo[250179]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:13 compute-0 sudo[250291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqrtaxngcbagxhuiavyxpuyyrhwotcan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031751.5038626-3386-97639540685423/AnsiballZ_systemd.py'
Feb 02 11:29:13 compute-0 sudo[250291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:13 compute-0 ceph-mon[74676]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:13 compute-0 python3.9[250293]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 11:29:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:13 compute-0 systemd[1]: Reloading.
Feb 02 11:29:13 compute-0 systemd-rc-local-generator[250322]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 11:29:13 compute-0 systemd-sysv-generator[250326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 11:29:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:14 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:14 compute-0 systemd[1]: Starting nova_compute container...
Feb 02 11:29:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:14 compute-0 podman[250333]: 2026-02-02 11:29:14.226186803 +0000 UTC m=+0.151073438 container init afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:29:14 compute-0 podman[250333]: 2026-02-02 11:29:14.231263561 +0000 UTC m=+0.156150166 container start afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 11:29:14 compute-0 nova_compute[250349]: + sudo -E kolla_set_configs
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Validating config file
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying service configuration files
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Deleting /etc/ceph
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Creating directory /etc/ceph
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/ceph
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Writing out command to execute
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:14 compute-0 nova_compute[250349]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 11:29:14 compute-0 nova_compute[250349]: ++ cat /run_command
Feb 02 11:29:14 compute-0 nova_compute[250349]: + CMD=nova-compute
Feb 02 11:29:14 compute-0 nova_compute[250349]: + ARGS=
Feb 02 11:29:14 compute-0 nova_compute[250349]: + sudo kolla_copy_cacerts
Feb 02 11:29:14 compute-0 nova_compute[250349]: + [[ ! -n '' ]]
Feb 02 11:29:14 compute-0 nova_compute[250349]: + . kolla_extend_start
Feb 02 11:29:14 compute-0 nova_compute[250349]: Running command: 'nova-compute'
Feb 02 11:29:14 compute-0 nova_compute[250349]: + echo 'Running command: '\''nova-compute'\'''
Feb 02 11:29:14 compute-0 nova_compute[250349]: + umask 0022
Feb 02 11:29:14 compute-0 nova_compute[250349]: + exec nova-compute
Feb 02 11:29:14 compute-0 podman[250333]: nova_compute
Feb 02 11:29:14 compute-0 systemd[1]: Started nova_compute container.
Feb 02 11:29:14 compute-0 sudo[250291]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:14.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:29:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:14 compute-0 ceph-mon[74676]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:15 compute-0 python3.9[250511]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:29:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:16 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:16 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:16 compute-0 python3.9[250663]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:29:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:16.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:16.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:29:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:17 compute-0 python3.9[250814]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 11:29:17 compute-0 ceph-mon[74676]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:18 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:18 compute-0 sudo[250965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfwlfdxijdhprhtpfcbpxlsyoozdnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031757.5451314-3566-1443500664343/AnsiballZ_podman_container.py'
Feb 02 11:29:18 compute-0 sudo[250965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:18 compute-0 python3.9[250967]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 02 11:29:18 compute-0 ceph-mon[74676]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:29:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:29:18 compute-0 sudo[250965]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:18.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.612 250353 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.613 250353 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.613 250353 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.613 250353 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.921 250353 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.943 250353 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:29:18 compute-0 nova_compute[250349]: 2026-02-02 11:29:18.943 250353 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 02 11:29:19 compute-0 sudo[251146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjtuhndmqvqtrjefrcbpjdvmusjelwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031758.746113-3590-96117391258416/AnsiballZ_systemd.py'
Feb 02 11:29:19 compute-0 sudo[251146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:19 compute-0 python3.9[251148]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 11:29:19 compute-0 systemd[1]: Stopping nova_compute container...
Feb 02 11:29:19 compute-0 systemd[1]: libpod-afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d.scope: Deactivated successfully.
Feb 02 11:29:19 compute-0 systemd[1]: libpod-afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d.scope: Consumed 2.753s CPU time.
Feb 02 11:29:19 compute-0 podman[251152]: 2026-02-02 11:29:19.526834799 +0000 UTC m=+0.059042944 container died afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 02 11:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d-userdata-shm.mount: Deactivated successfully.
Feb 02 11:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75-merged.mount: Deactivated successfully.
Feb 02 11:29:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:20 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:20.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:21 compute-0 sudo[251183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:29:21 compute-0 sudo[251183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:21 compute-0 sudo[251183]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:21 compute-0 sudo[251208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:29:21 compute-0 sudo[251208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:29:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:21 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:29:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:22 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:29:22 compute-0 ceph-mon[74676]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:29:22 compute-0 podman[251152]: 2026-02-02 11:29:22.14569207 +0000 UTC m=+2.677900215 container cleanup afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:29:22 compute-0 podman[251152]: nova_compute
Feb 02 11:29:22 compute-0 podman[251245]: nova_compute
Feb 02 11:29:22 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 02 11:29:22 compute-0 systemd[1]: Stopped nova_compute container.
Feb 02 11:29:22 compute-0 systemd[1]: Starting nova_compute container...
Feb 02 11:29:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335386f5467ec1ac77d32d1eb3e5014e1a7af3e07cc1bc9cf8d49a45158bde75/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:22 compute-0 sudo[251208]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:29:22 compute-0 podman[251260]: 2026-02-02 11:29:22.42925478 +0000 UTC m=+0.190107876 container init afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:29:22 compute-0 podman[251260]: 2026-02-02 11:29:22.435039239 +0000 UTC m=+0.195892335 container start afa5278ccdc1461d3e81efed693cae2d48e0cb476b8968ca088f97422382105d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
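[Annotation] The config_data dict recorded in the podman labels above is the edpm_ansible-managed container definition. As a rough illustration (not edpm_ansible's actual code path, which goes through the containers.podman collection), it maps onto podman CLI flags like this; the dict below is abbreviated to one volume for readability:

    # Sketch: translate a (trimmed) config_data dict into "podman create" flags.
    cfg = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True, "user": "nova", "net": "host", "pid": "host",
        "command": "kolla_start",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro"],
    }
    args = ["podman", "create", "--name", "nova_compute",
            "--user", cfg["user"], "--net", cfg["net"], "--pid", cfg["pid"]]
    if cfg["privileged"]:
        args.append("--privileged")
    for key, value in cfg["environment"].items():
        args += ["--env", f"{key}={value}"]
    for volume in cfg["volumes"]:
        args += ["--volume", volume]
    args += [cfg["image"], cfg["command"]]
    print(" ".join(args))   # print rather than run; this is a sketch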
Feb 02 11:29:22 compute-0 nova_compute[251290]: + sudo -E kolla_set_configs
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Validating config file
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying service configuration files
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /etc/ceph
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Creating directory /etc/ceph
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Writing out command to execute
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:22 compute-0 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 11:29:22 compute-0 nova_compute[251290]: ++ cat /run_command
Feb 02 11:29:22 compute-0 nova_compute[251290]: + CMD=nova-compute
Feb 02 11:29:22 compute-0 nova_compute[251290]: + ARGS=
Feb 02 11:29:22 compute-0 nova_compute[251290]: + sudo kolla_copy_cacerts
Feb 02 11:29:22 compute-0 podman[251260]: nova_compute
Feb 02 11:29:22 compute-0 systemd[1]: Started nova_compute container.
Feb 02 11:29:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:22 compute-0 nova_compute[251290]: + [[ ! -n '' ]]
Feb 02 11:29:22 compute-0 nova_compute[251290]: + . kolla_extend_start
Feb 02 11:29:22 compute-0 nova_compute[251290]: + echo 'Running command: '\''nova-compute'\'''
Feb 02 11:29:22 compute-0 nova_compute[251290]: Running command: 'nova-compute'
Feb 02 11:29:22 compute-0 nova_compute[251290]: + umask 0022
Feb 02 11:29:22 compute-0 nova_compute[251290]: + exec nova-compute
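[Annotation] The block from "sudo -E kolla_set_configs" through "exec nova-compute" is Kolla's COPY_ALWAYS startup flow: load /var/lib/kolla/config_files/config.json, delete-then-copy each listed file into place, set permissions, read /run_command, and exec it as PID 1. A condensed, illustrative re-implementation (the helper names and the trimmed config.json schema are assumptions, not Kolla internals):

    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def set_configs():
        with open(CONFIG) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            src, dest = item["source"], item["dest"]
            if os.path.exists(dest):
                # "Deleting <dest>" lines above
                shutil.rmtree(dest) if os.path.isdir(dest) else os.remove(dest)
            # "Copying ... to ..." lines above
            (shutil.copytree if os.path.isdir(src) else shutil.copy2)(src, dest)
            if "perm" in item:
                # "Setting permission for ..." lines above
                os.chmod(dest, int(item["perm"], 8))

    def start():
        set_configs()
        with open("/run_command") as f:     # "++ cat /run_command"
            cmd = f.read().strip()          # -> "nova-compute"
        os.umask(0o022)                     # "+ umask 0022"
        os.execvp(cmd, [cmd])               # "+ exec nova-compute"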
Feb 02 11:29:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
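[Annotation] These paired radosgw/beast lines are anonymous "HEAD / HTTP/1.0" health probes from 192.168.122.100 and .102; the same pattern recurs roughly every two seconds later in this section. A trivial reproduction of such a probe, assuming (not confirmed by this log) that the RGW frontend listens on port 8080:

    import http.client

    # Sketch: anonymous HEAD / health probe, as seen in the beast access lines.
    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # expect 200 with an empty body
    conn.close()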
Feb 02 11:29:22 compute-0 sudo[251146]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:29:22.662 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:29:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:29:22.662 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:29:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:29:22.662 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:29:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:29:22 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:29:22 compute-0 sudo[251366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:29:22 compute-0 sudo[251366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:22 compute-0 sudo[251366]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:22 compute-0 sudo[251404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:29:22 compute-0 sudo[251404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
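[Annotation] The cephadm command above wraps ceph-volume in a one-shot container (the nervous_rosalind/bold_joliot containers that follow), with "--config-json -" feeding the minimal conf and bootstrap-osd keyring gathered by the mon_command calls above via stdin. A hedged sketch of reproducing the equivalent call by hand; the local file names for the conf and keyring are assumptions:

    import json
    import subprocess

    fsid = "1d33f80b-d6ca-501c-bac7-184379b89279"
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    payload = json.dumps({
        "config": open("minimal-ceph.conf").read(),       # "config generate-minimal-conf"
        "keyring": open("bootstrap-osd.keyring").read(),  # "auth get client.bootstrap-osd"
    })
    subprocess.run(
        ["sudo", "cephadm", "--image", image, "ceph-volume",
         "--fsid", fsid, "--config-json", "-",
         "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=payload, text=True, check=True)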
Feb 02 11:29:23 compute-0 sudo[251502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnijwwrtubvkcknoznmaeaabztkdlvpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770031762.7899036-3617-88919825726872/AnsiballZ_podman_container.py'
Feb 02 11:29:23 compute-0 sudo[251502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:29:23 compute-0 ceph-mon[74676]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:29:23 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:29:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.342093177 +0000 UTC m=+0.053768779 container create 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:29:23 compute-0 python3.9[251504]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
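[Annotation] The ansible-containers.podman.podman_container invocation above (state=started, nearly every other parameter None) resolves to a plain "podman start nova_compute_init", as the PODMAN-CONTAINER-DEBUG line below confirms. An illustrative sketch of that idempotent state=started logic, not the module's actual source:

    import subprocess

    def ensure_started(name: str) -> bool:
        # Probe the existing container's run state.
        probe = subprocess.run(
            ["podman", "container", "inspect", "--format",
             "{{.State.Running}}", name],
            capture_output=True, text=True)
        if probe.returncode != 0:
            raise RuntimeError(f"container {name} does not exist")
        if probe.stdout.strip() == "true":
            return False                       # already running: no change
        subprocess.run(["podman", "start", name], check=True)
        return True                            # like Ansible's "changed"

    ensure_started("nova_compute_init")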
Feb 02 11:29:23 compute-0 systemd[1]: Started libpod-conmon-75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481.scope.
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.311621408 +0000 UTC m=+0.023297030 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.428458246 +0000 UTC m=+0.140133888 container init 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.437409978 +0000 UTC m=+0.149085580 container start 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.441821196 +0000 UTC m=+0.153496828 container attach 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:29:23 compute-0 nervous_rosalind[251559]: 167 167
Feb 02 11:29:23 compute-0 systemd[1]: libpod-75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481.scope: Deactivated successfully.
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.445186674 +0000 UTC m=+0.156862276 container died 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd7d9d8c5f7427a743eac8e574756dd5f324a8dd5d56d68c62a1432de9847e9-merged.mount: Deactivated successfully.
Feb 02 11:29:23 compute-0 podman[251542]: 2026-02-02 11:29:23.533331686 +0000 UTC m=+0.245007288 container remove 75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:29:23 compute-0 systemd[1]: libpod-conmon-75c906cdb0fd46ca15534452f5a437eb0ac78673753976e10640890ed6e45481.scope: Deactivated successfully.
Feb 02 11:29:23 compute-0 systemd[1]: Started libpod-conmon-24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295.scope.
Feb 02 11:29:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf50583d8a298261fd60f0aa898859f9cf4873cb9816e205a49b4a629708d4b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf50583d8a298261fd60f0aa898859f9cf4873cb9816e205a49b4a629708d4b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf50583d8a298261fd60f0aa898859f9cf4873cb9816e205a49b4a629708d4b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:23 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:23 compute-0 podman[251600]: 2026-02-02 11:29:23.699718999 +0000 UTC m=+0.214695244 container init 24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:29:23 compute-0 podman[251600]: 2026-02-02 11:29:23.704157318 +0000 UTC m=+0.219133533 container start 24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Feb 02 11:29:23 compute-0 python3.9[251504]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Applying nova statedir ownership
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 02 11:29:23 compute-0 nova_compute_init[251639]: INFO:nova_statedir:Nova statedir ownership complete
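[Annotation] The nova_compute_init burst above is /sbin/nova_statedir_ownership.py walking /var/lib/nova, re-chowning anything not owned by 42436:42436 while skipping the path in NOVA_STATEDIR_OWNERSHIP_SKIP. A condensed, illustrative version (the real script also sets the system_u:object_r:container_file_t:s0 SELinux context, omitted here; the colon separator for the skip list is an assumption):

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {p for p in os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":") if p}

    def fix_tree(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for path in paths:
                if path in SKIP:
                    continue
                st = os.stat(path, follow_symlinks=False)
                # "Checking uid: ... gid: ... path: ..." lines above
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of ... to 42436:42436"
                    os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)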
Feb 02 11:29:23 compute-0 systemd[1]: libpod-24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295.scope: Deactivated successfully.
Feb 02 11:29:23 compute-0 podman[251626]: 2026-02-02 11:29:23.802301261 +0000 UTC m=+0.156610819 container create bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:29:23 compute-0 podman[251626]: 2026-02-02 11:29:23.708781083 +0000 UTC m=+0.063090661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:23 compute-0 podman[251651]: 2026-02-02 11:29:23.857035908 +0000 UTC m=+0.079240403 container died 24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:29:23 compute-0 systemd[1]: Started libpod-conmon-bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047.scope.
Feb 02 11:29:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:23 compute-0 sudo[251675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:29:23 compute-0 sudo[251675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:23 compute-0 sudo[251675]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:24 compute-0 podman[251626]: 2026-02-02 11:29:24.04807363 +0000 UTC m=+0.402383208 container init bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:29:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:24 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:24 compute-0 podman[251626]: 2026-02-02 11:29:24.058546826 +0000 UTC m=+0.412856384 container start bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:29:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:29:24 compute-0 podman[251626]: 2026-02-02 11:29:24.196673045 +0000 UTC m=+0.550982633 container attach bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:29:24 compute-0 podman[251651]: 2026-02-02 11:29:24.206936654 +0000 UTC m=+0.429141129 container cleanup 24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 11:29:24 compute-0 systemd[1]: libpod-conmon-24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295.scope: Deactivated successfully.
Feb 02 11:29:24 compute-0 sudo[251502]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf50583d8a298261fd60f0aa898859f9cf4873cb9816e205a49b4a629708d4b-merged.mount: Deactivated successfully.
Feb 02 11:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24393538ff4669875d9fc321413b3fd8bfd33a339e6ddccc18f04a62b77d0295-userdata-shm.mount: Deactivated successfully.
Feb 02 11:29:24 compute-0 bold_joliot[251670]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:29:24 compute-0 bold_joliot[251670]: --> All data devices are unavailable
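[Annotation] "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means ceph-volume's batch filter rejected /dev/ceph_vg0/ceph_lv0, consistent with the LV already carrying an OSD; cephadm then inventories existing OSDs with the "lvm list --format json" call visible in the ceph-admin sudo line below. A sketch of reading that inventory (flag layout mirrors the logged command; treat it as illustrative):

    import json
    import subprocess

    out = subprocess.run(
        ["sudo", "cephadm", "ceph-volume",
         "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    # Output is a dict keyed by OSD id, each with a list of device entries.
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev["lv_path"], dev["tags"].get("ceph.osd_fsid"))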
Feb 02 11:29:24 compute-0 systemd[1]: libpod-bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047.scope: Deactivated successfully.
Feb 02 11:29:24 compute-0 podman[251626]: 2026-02-02 11:29:24.441212708 +0000 UTC m=+0.795522266 container died bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c9364e183e3be35e90c953486aea3ccacfe2922be8c94b4874d0d46f97b264-merged.mount: Deactivated successfully.
Feb 02 11:29:24 compute-0 podman[251626]: 2026-02-02 11:29:24.509179161 +0000 UTC m=+0.863488719 container remove bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:29:24 compute-0 systemd[1]: libpod-conmon-bce90c234b55c513179f04fb6335fba56d58377729944f236dabbc7dd3ba8047.scope: Deactivated successfully.
Feb 02 11:29:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:24.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:24 compute-0 sudo[251404]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:24 compute-0 sudo[251755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:29:24 compute-0 sudo[251755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:24 compute-0 sudo[251755]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:24 compute-0 sudo[251781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:29:24 compute-0 sudo[251781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:24 compute-0 nova_compute[251290]: 2026-02-02 11:29:24.773 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:24 compute-0 nova_compute[251290]: 2026-02-02 11:29:24.774 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:24 compute-0 nova_compute[251290]: 2026-02-02 11:29:24.774 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 11:29:24 compute-0 nova_compute[251290]: 2026-02-02 11:29:24.775 251294 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 02 11:29:24 compute-0 nova_compute[251290]: 2026-02-02 11:29:24.989 251294 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:29:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:24 : epoch 6980899c : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.007 251294 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.007 251294 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
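[Annotation] This grep is the os-brick capability probe for iscsiadm's node.session.scan option; return code 1 just means the string was not found, and it is logged as a non-retried failure rather than an error. Note that /usr/sbin/iscsiadm inside this container was replaced earlier by the run-on-host wrapper (see the kolla_set_configs lines above), so the grep inspects the wrapper, not the host binary. The probe, expressed with oslo.concurrency as nova uses it:

    from oslo_concurrency import processutils

    # rc 0 => the string is present in the binary; rc 1 => absent.
    try:
        out, _err = processutils.execute(
            "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
            check_exit_code=True)
        option_advertised = True
    except processutils.ProcessExecutionError:
        option_advertised = False   # "returned: 1 ... failed. Not Retrying."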
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.044931878 +0000 UTC m=+0.024592818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.153163015 +0000 UTC m=+0.132823935 container create e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:29:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:25 compute-0 systemd[1]: Started libpod-conmon-e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa.scope.
Feb 02 11:29:25 compute-0 ceph-mon[74676]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:29:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3189920706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:25 compute-0 sshd-session[225658]: Connection closed by 192.168.122.30 port 52372
Feb 02 11:29:25 compute-0 sshd-session[225655]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:29:25 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Feb 02 11:29:25 compute-0 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Feb 02 11:29:25 compute-0 systemd[1]: session-54.scope: Consumed 1min 55.924s CPU time.
Feb 02 11:29:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:25 compute-0 systemd-logind[793]: Removed session 54.
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.301381719 +0000 UTC m=+0.281042669 container init e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.310146085 +0000 UTC m=+0.289807005 container start e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:29:25 compute-0 romantic_ishizaka[251869]: 167 167
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.316060637 +0000 UTC m=+0.295721587 container attach e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:29:25 compute-0 systemd[1]: libpod-e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa.scope: Deactivated successfully.
Feb 02 11:29:25 compute-0 conmon[251869]: conmon e0fff8a203d5dc1c4ba3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa.scope/container/memory.events
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.318263071 +0000 UTC m=+0.297924001 container died e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f0f10de385ff48d80b59b3b04885110543b0972ed598e7bcf02798e8c21e20-merged.mount: Deactivated successfully.
Feb 02 11:29:25 compute-0 podman[251851]: 2026-02-02 11:29:25.377912321 +0000 UTC m=+0.357573241 container remove e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_ishizaka, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:29:25 compute-0 systemd[1]: libpod-conmon-e0fff8a203d5dc1c4ba331ab4efcc57d10c51da12c9b45cf9ffa39c6eaab3daa.scope: Deactivated successfully.
Feb 02 11:29:25 compute-0 podman[251895]: 2026-02-02 11:29:25.543383108 +0000 UTC m=+0.063275737 container create 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.565 251294 INFO nova.virt.driver [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 02 11:29:25 compute-0 systemd[1]: Started libpod-conmon-107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca.scope.
Feb 02 11:29:25 compute-0 podman[251895]: 2026-02-02 11:29:25.505946966 +0000 UTC m=+0.025839615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f5436dbc8fd8b8b4aaf3655be23b97df256de6f2b5edba42f0dfe147a44735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f5436dbc8fd8b8b4aaf3655be23b97df256de6f2b5edba42f0dfe147a44735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f5436dbc8fd8b8b4aaf3655be23b97df256de6f2b5edba42f0dfe147a44735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f5436dbc8fd8b8b4aaf3655be23b97df256de6f2b5edba42f0dfe147a44735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:25 compute-0 podman[251895]: 2026-02-02 11:29:25.650107691 +0000 UTC m=+0.170000340 container init 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:29:25 compute-0 podman[251895]: 2026-02-02 11:29:25.658934629 +0000 UTC m=+0.178827298 container start 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:29:25 compute-0 podman[251895]: 2026-02-02 11:29:25.668629401 +0000 UTC m=+0.188522080 container attach 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:29:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:25 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.810 251294 INFO nova.compute.provider_config [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.820 251294 DEBUG oslo_concurrency.lockutils [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.820 251294 DEBUG oslo_concurrency.lockutils [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.821 251294 DEBUG oslo_concurrency.lockutils [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.821 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.821 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.821 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.822 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.822 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.822 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.822 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.822 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.823 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.823 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.823 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.823 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.823 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.824 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.824 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.824 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.824 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.824 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.825 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.825 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.825 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.825 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.825 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.826 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.826 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.826 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.826 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.826 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.827 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.828 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.828 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.828 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.828 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.828 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.829 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.829 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.829 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.829 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.829 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.830 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.830 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.830 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.830 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.830 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.831 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.832 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.832 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.832 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.832 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.832 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.833 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.834 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.834 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.835 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.836 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.836 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.836 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.836 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.836 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.837 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.837 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.837 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.837 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.837 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.838 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.838 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.838 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.838 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.839 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.839 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.839 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.839 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.839 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.840 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.840 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.840 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.840 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.840 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.841 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.842 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.842 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.842 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.842 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.843 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.843 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.843 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.843 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.843 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.844 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.845 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.845 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.845 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.845 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.845 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.853 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.853 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.854 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.855 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.855 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.855 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.855 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.855 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.856 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.856 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.856 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.856 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.856 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.857 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.858 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.858 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.858 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.859 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.859 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.859 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.859 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.859 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.860 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.861 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.861 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.861 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.861 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.861 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.862 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.863 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.863 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.863 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.863 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.863 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.864 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.864 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.864 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.864 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.864 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.865 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.866 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.867 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.867 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.867 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.867 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.867 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.868 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.868 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.868 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.868 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.868 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.869 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.870 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.871 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.871 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.871 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.871 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.871 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.872 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.872 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.872 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.872 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.872 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.873 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.874 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.875 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.876 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.877 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.877 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.877 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.877 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.877 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.878 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.878 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.878 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.878 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.878 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.879 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.879 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.879 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.879 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.879 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.880 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.880 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.880 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.880 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.880 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.881 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.881 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.881 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.881 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.881 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.882 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.882 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.882 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.882 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.882 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.883 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.883 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.883 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.883 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.883 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.884 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.884 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.884 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.884 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.884 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.885 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.885 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.885 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.885 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.885 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.886 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.886 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.886 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.886 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.886 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.887 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.887 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.887 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.887 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.887 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.888 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.888 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.888 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.888 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.888 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.889 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.889 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.889 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.889 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.889 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.890 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.890 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.890 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.890 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.890 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.891 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.892 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.893 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.894 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.895 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.896 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.896 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.896 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.896 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.896 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.897 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.898 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.899 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.900 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.901 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.902 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.903 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.904 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.904 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.904 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.904 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.904 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.905 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
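The key_manager, barbican, and barbican_service_user groups above come from castellan, nova's key-manager abstraction. Reconstructed as a nova.conf fragment, the relevant parts are roughly the following (a sketch from the logged values only; key_manager.fixed_key is masked in the log, and auth_endpoint is still the upstream default http://localhost/identity/v3 rather than a site-specific Keystone):

    [key_manager]
    backend = barbican

    [barbican]
    # logged value is the upstream default; point this at the real Keystone v3 endpoint
    auth_endpoint = http://localhost/identity/v3
    barbican_endpoint_type = internal
    number_of_retries = 60
    retry_delay = 1
    verify_ssl = True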
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.906 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.906 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.906 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.906 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.906 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.907 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.908 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.908 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.908 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.908 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.908 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
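The vault group is logged because castellan also ships a HashiCorp Vault backend; every value above sits at its default (vault_url http://127.0.0.1:8200, kv_mountpoint secret, kv_version 2), so Vault is not in use in this deployment. For comparison, switching the key manager over would look roughly like this (a hypothetical sketch; the URL, CA file, and AppRole credentials below are placeholders, not values from this log):

    [key_manager]
    backend = vault

    [vault]
    vault_url = https://vault.example.com:8200
    use_ssl = True
    ssl_ca_crt_file = /etc/pki/tls/certs/vault-ca.crt
    kv_mountpoint = secret
    kv_version = 2
    approle_role_id = <role-id>
    approle_secret_id = <secret-id>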
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.909 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.909 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.909 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.909 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.909 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.910 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.911 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.911 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.911 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.911 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.912 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.913 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.914 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.914 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.915 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.915 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.915 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.915 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.915 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.916 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.916 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.916 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.916 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.916 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.917 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.918 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.918 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.918 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.918 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.918 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.919 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.919 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.919 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.919 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.919 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.920 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.920 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.920 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.920 251294 WARNING oslo_config.cfg [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 02 11:29:25 compute-0 nova_compute[251290]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 02 11:29:25 compute-0 nova_compute[251290]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Feb 02 11:29:25 compute-0 nova_compute[251290]: and ``live_migration_inbound_addr`` respectively.
Feb 02 11:29:25 compute-0 nova_compute[251290]: ).  Its value may be silently ignored in the future.
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.921 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
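The warning above is nova's stock deprecation text: the templated live_migration_uri (qemu+tls://%s/system, where nova substitutes the target host) still works, but the same qemu+tls transport is now expressed through the two replacement options it names. A minimal equivalent fragment (the inbound address is deployment-specific and is logged as None above):

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    live_migration_inbound_addr = <migration-network address>
    live_migration_with_native_tls = True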
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.921 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.921 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.921 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.921 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.922 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.923 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.924 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.924 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rbd_secret_uuid        = 1d33f80b-d6ca-501c-bac7-184379b89279 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.924 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.924 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.924 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.925 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.926 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.926 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.926 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.926 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.926 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.927 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.928 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.928 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.928 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.928 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.928 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.929 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.929 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.929 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.929 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.929 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.930 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.930 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
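Taken together, the libvirt group describes a KVM host using Ceph RBD for ephemeral storage, with TLS-encrypted live migration and multipath volume attach. The storage-related values above collapse to this fragment (all values copied from the log; rbd_secret_uuid is presumably the libvirt secret carrying the cephx key for client.openstack):

    [libvirt]
    virt_type = kvm
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 1d33f80b-d6ca-501c-bac7-184379b89279
    volume_use_multipath = True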
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.930 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.930 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.930 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.931 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.932 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.932 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.932 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.932 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.932 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.933 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.934 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
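With neutron.service_metadata_proxy = True, the nova metadata service only trusts requests relayed by Neutron's metadata proxy and validates them against the shared secret (masked as **** above). The same secret must be configured on both sides; schematically (the value below is a placeholder, not the masked one):

    # nova.conf
    [neutron]
    service_metadata_proxy = True
    metadata_proxy_shared_secret = <shared-secret>

    # neutron metadata agent (e.g. metadata_agent.ini)
    [DEFAULT]
    metadata_proxy_shared_secret = <shared-secret>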
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.935 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.936 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.937 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.938 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.938 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.938 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.938 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.938 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.939 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.939 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.939 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.939 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.939 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.940 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.941 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.942 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.942 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.942 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.942 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.942 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.943 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.943 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.943 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.943 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.943 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.944 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.945 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.945 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.945 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.945 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.946 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.946 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.946 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.946 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.946 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.947 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.948 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.948 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.948 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.948 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.948 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.949 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.950 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.951 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.952 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.953 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.954 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.955 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.956 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.957 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.958 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.959 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.960 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.960 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.960 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.960 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.960 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.961 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.962 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.962 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.963 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.963 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.963 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.963 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.963 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.964 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.964 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.964 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.964 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.964 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.965 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.965 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.965 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.966 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.967 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.968 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.969 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.970 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.971 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.972 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.972 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.972 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.972 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.973 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.973 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.973 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.973 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.973 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.974 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.975 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.975 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.975 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.975 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.976 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.977 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.978 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.979 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.980 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.981 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.982 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.983 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.983 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.983 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.983 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.983 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.984 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.984 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.984 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.984 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.985 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.985 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.985 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.985 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.985 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.986 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.987 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.987 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.987 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.987 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.987 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.988 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.988 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.988 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.988 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.988 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.989 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.990 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.990 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.990 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.990 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.990 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.991 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.991 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.991 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.991 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.991 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.992 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.993 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.994 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.995 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.996 251294 DEBUG oslo_service.service [None req-c3d0deab-b78d-489a-8f66-e91786e4de4e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 11:29:25 compute-0 nova_compute[251290]: 2026-02-02 11:29:25.997 251294 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 02 11:29:26 compute-0 festive_lehmann[251912]: {
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:     "1": [
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:         {
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "devices": [
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "/dev/loop3"
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             ],
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "lv_name": "ceph_lv0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "lv_size": "21470642176",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "name": "ceph_lv0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "tags": {
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.cluster_name": "ceph",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.crush_device_class": "",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.encrypted": "0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.osd_id": "1",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.type": "block",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.vdo": "0",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:                 "ceph.with_tpm": "0"
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             },
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "type": "block",
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:             "vg_name": "ceph_vg0"
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:         }
Feb 02 11:29:26 compute-0 festive_lehmann[251912]:     ]
Feb 02 11:29:26 compute-0 festive_lehmann[251912]: }
Feb 02 11:29:26 compute-0 systemd[1]: libpod-107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca.scope: Deactivated successfully.
Feb 02 11:29:26 compute-0 podman[251895]: 2026-02-02 11:29:26.031917957 +0000 UTC m=+0.551810606 container died 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.036 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.038 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.038 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.038 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 02 11:29:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:26 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:26 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 02 11:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f5436dbc8fd8b8b4aaf3655be23b97df256de6f2b5edba42f0dfe147a44735-merged.mount: Deactivated successfully.
Feb 02 11:29:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:26 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 02 11:29:26 compute-0 podman[251895]: 2026-02-02 11:29:26.105995358 +0000 UTC m=+0.625887987 container remove 107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lehmann, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:29:26 compute-0 systemd[1]: libpod-conmon-107e20f320deb8b54407da6356c8ad3e417878bc632a818fce61f1da0543a7ca.scope: Deactivated successfully.
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.133 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1d646de280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.137 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1d646de280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.138 251294 INFO nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Connection event '1' reason 'None'
Feb 02 11:29:26 compute-0 sudo[251781]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.164 251294 WARNING nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 02 11:29:26 compute-0 nova_compute[251290]: 2026-02-02 11:29:26.165 251294 DEBUG nova.virt.libvirt.volume.mount [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 02 11:29:26 compute-0 sudo[251977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:29:26 compute-0 sudo[251977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:26 compute-0 sudo[251977]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:26 compute-0 sudo[252002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:29:26 compute-0 sudo[252002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:26.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.686671516 +0000 UTC m=+0.048258328 container create b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:29:26 compute-0 systemd[1]: Started libpod-conmon-b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07.scope.
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.662821221 +0000 UTC m=+0.024408063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.778709771 +0000 UTC m=+0.140296603 container init b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.787629531 +0000 UTC m=+0.149216343 container start b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.7930532 +0000 UTC m=+0.154640002 container attach b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:29:26 compute-0 amazing_tu[252099]: 167 167
Feb 02 11:29:26 compute-0 systemd[1]: libpod-b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07.scope: Deactivated successfully.
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.796652124 +0000 UTC m=+0.158238946 container died b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec16bf16e0de9bc1d9f3944692b04116888f28a4dfbe46101ce339e0df11eca3-merged.mount: Deactivated successfully.
Feb 02 11:29:26 compute-0 podman[252075]: 2026-02-02 11:29:26.892823 +0000 UTC m=+0.254409812 container remove b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tu, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:29:26 compute-0 systemd[1]: libpod-conmon-b63cdfdffc1a86e187f8a7283ea574021fe8db1ac4cda2da14ec56b784971a07.scope: Deactivated successfully.
Feb 02 11:29:27 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:27.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:29:27 compute-0 podman[252123]: 2026-02-02 11:29:27.079720982 +0000 UTC m=+0.083259320 container create 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:29:27 compute-0 podman[252123]: 2026-02-02 11:29:27.024953434 +0000 UTC m=+0.028491792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:29:27 compute-0 systemd[1]: Started libpod-conmon-33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451.scope.
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.164 251294 INFO nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Libvirt host capabilities <capabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]: 
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <host>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <uuid>f6638e84-7d32-4f67-9114-f32b50ad8ee5</uuid>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <arch>x86_64</arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model>EPYC-Rome-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <vendor>AMD</vendor>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <microcode version='16777317'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <signature family='23' model='49' stepping='0'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='x2apic'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='tsc-deadline'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='osxsave'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='hypervisor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='tsc_adjust'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='spec-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='stibp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='arch-capabilities'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='cmp_legacy'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='topoext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='virt-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='lbrv'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='tsc-scale'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='vmcb-clean'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='pause-filter'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='pfthreshold'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='svme-addr-chk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='rdctl-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='skip-l1dfl-vmentry'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='mds-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature name='pschange-mc-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <pages unit='KiB' size='4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <pages unit='KiB' size='2048'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <pages unit='KiB' size='1048576'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <power_management>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <suspend_mem/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </power_management>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <iommu support='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <migration_features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <live/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <uri_transports>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <uri_transport>tcp</uri_transport>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <uri_transport>rdma</uri_transport>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </uri_transports>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </migration_features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <topology>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <cells num='1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <cell id='0'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <memory unit='KiB'>7864296</memory>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <pages unit='KiB' size='4'>1966074</pages>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <pages unit='KiB' size='2048'>0</pages>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <distances>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <sibling id='0' value='10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           </distances>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           <cpus num='8'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:           </cpus>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         </cell>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </cells>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </topology>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <cache>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </cache>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <secmodel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model>selinux</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <doi>0</doi>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </secmodel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <secmodel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model>dac</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <doi>0</doi>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </secmodel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </host>
Feb 02 11:29:27 compute-0 nova_compute[251290]: 
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <guest>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <os_type>hvm</os_type>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <arch name='i686'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <wordsize>32</wordsize>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <domain type='qemu'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <domain type='kvm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <pae/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <nonpae/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <acpi default='on' toggle='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <apic default='on' toggle='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <cpuselection/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <deviceboot/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <disksnapshot default='on' toggle='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <externalSnapshot/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </guest>
Feb 02 11:29:27 compute-0 nova_compute[251290]: 
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <guest>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <os_type>hvm</os_type>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <arch name='x86_64'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <wordsize>64</wordsize>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <domain type='qemu'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <domain type='kvm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <acpi default='on' toggle='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <apic default='on' toggle='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <cpuselection/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <deviceboot/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <disksnapshot default='on' toggle='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <externalSnapshot/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </guest>
Feb 02 11:29:27 compute-0 nova_compute[251290]: 
Feb 02 11:29:27 compute-0 nova_compute[251290]: </capabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]: 
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.174 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 02 11:29:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:29:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2841e721dfce78e546b54fe8edc2c21e97b64fceb47fa8e2f4bbe9d51ead45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2841e721dfce78e546b54fe8edc2c21e97b64fceb47fa8e2f4bbe9d51ead45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2841e721dfce78e546b54fe8edc2c21e97b64fceb47fa8e2f4bbe9d51ead45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2841e721dfce78e546b54fe8edc2c21e97b64fceb47fa8e2f4bbe9d51ead45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:29:27 compute-0 podman[252123]: 2026-02-02 11:29:27.20101326 +0000 UTC m=+0.204551618 container init 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.206 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 02 11:29:27 compute-0 nova_compute[251290]: <domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <domain>kvm</domain>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <arch>i686</arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <vcpu max='240'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <iothreads supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <os supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='firmware'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <loader supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>rom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pflash</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='readonly'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>yes</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='secure'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </loader>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </os>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-passthrough' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='hostPassthroughMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='maximum' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='maximumMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-model' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <vendor>AMD</vendor>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='x2apic'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='hypervisor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='stibp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='overflow-recov'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='succor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lbrv'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-scale'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='flushbyasid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pause-filter'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pfthreshold'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='disable' name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='custom' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Dhyana-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 podman[252123]: 2026-02-02 11:29:27.211040112 +0000 UTC m=+0.214578450 container start 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 podman[252123]: 2026-02-02 11:29:27.215788121 +0000 UTC m=+0.219326459 container attach 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v6'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v7'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <memoryBacking supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='sourceType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>anonymous</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>memfd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </memoryBacking>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <disk supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='diskDevice'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>disk</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cdrom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>floppy</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>lun</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ide</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>fdc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>sata</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <graphics supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vnc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egl-headless</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <video supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='modelType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vga</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cirrus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>none</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>bochs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ramfb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </video>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hostdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='mode'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>subsystem</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='startupPolicy'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>mandatory</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>requisite</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>optional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='subsysType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pci</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='capsType'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='pciBackend'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hostdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <rng supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>random</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <filesystem supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='driverType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>path</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>handle</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtiofs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </filesystem>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tpm supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-tis</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-crb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emulator</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>external</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendVersion'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>2.0</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </tpm>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <redirdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </redirdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <channel supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </channel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <crypto supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </crypto>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <interface supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>passt</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <panic supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>isa</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>hyperv</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </panic>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <console supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>null</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dev</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pipe</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stdio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>udp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tcp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu-vdagent</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </console>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <gic supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <vmcoreinfo supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <genid supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backingStoreInput supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backup supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <async-teardown supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <s390-pv supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <ps2 supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tdx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sev supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sgx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hyperv supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='features'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>relaxed</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vapic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>spinlocks</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vpindex</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>runtime</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>synic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stimer</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reset</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vendor_id</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>frequencies</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reenlightenment</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tlbflush</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ipi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>avic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emsr_bitmap</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>xmm_input</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <spinlocks>4095</spinlocks>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <stimer_direct>on</stimer_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hyperv>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <launchSecurity supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]: </domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.216 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 02 11:29:27 compute-0 nova_compute[251290]: <domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <domain>kvm</domain>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <arch>i686</arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <vcpu max='4096'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <iothreads supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <os supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='firmware'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <loader supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>rom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pflash</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='readonly'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>yes</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='secure'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </loader>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </os>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-passthrough' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='hostPassthroughMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='maximum' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='maximumMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-model' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <vendor>AMD</vendor>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='x2apic'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='hypervisor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='stibp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='overflow-recov'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='succor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lbrv'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-scale'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='flushbyasid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pause-filter'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pfthreshold'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='disable' name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='custom' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Dhyana-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 11:29:27 compute-0 ceph-mon[74676]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1157971169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v6'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v7'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <memoryBacking supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='sourceType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>anonymous</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>memfd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </memoryBacking>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <disk supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='diskDevice'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>disk</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cdrom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>floppy</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>lun</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>fdc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>sata</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <graphics supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vnc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egl-headless</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <video supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='modelType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vga</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cirrus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>none</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>bochs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ramfb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </video>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hostdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='mode'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>subsystem</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='startupPolicy'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>mandatory</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>requisite</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>optional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='subsysType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pci</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='capsType'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='pciBackend'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hostdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <rng supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>random</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <filesystem supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='driverType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>path</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>handle</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtiofs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </filesystem>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tpm supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-tis</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-crb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emulator</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>external</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendVersion'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>2.0</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </tpm>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <redirdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </redirdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <channel supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </channel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <crypto supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </crypto>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <interface supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>passt</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <panic supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>isa</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>hyperv</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </panic>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <console supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>null</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dev</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pipe</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stdio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>udp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tcp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu-vdagent</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </console>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <gic supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <vmcoreinfo supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <genid supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backingStoreInput supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backup supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <async-teardown supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <s390-pv supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <ps2 supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tdx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sev supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sgx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hyperv supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='features'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>relaxed</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vapic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>spinlocks</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vpindex</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>runtime</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>synic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stimer</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reset</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vendor_id</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>frequencies</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reenlightenment</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tlbflush</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ipi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>avic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emsr_bitmap</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>xmm_input</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <spinlocks>4095</spinlocks>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <stimer_direct>on</stimer_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hyperv>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <launchSecurity supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]: </domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.266 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.272 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 02 11:29:27 compute-0 nova_compute[251290]: <domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <domain>kvm</domain>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <arch>x86_64</arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <vcpu max='240'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <iothreads supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <os supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='firmware'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <loader supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>rom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pflash</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='readonly'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>yes</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='secure'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </loader>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </os>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-passthrough' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='hostPassthroughMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='maximum' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='maximumMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-model' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <vendor>AMD</vendor>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='x2apic'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='hypervisor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='stibp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='overflow-recov'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='succor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lbrv'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-scale'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='flushbyasid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pause-filter'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pfthreshold'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='disable' name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='custom' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Dhyana-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v6'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v7'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <memoryBacking supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='sourceType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>anonymous</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>memfd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </memoryBacking>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <disk supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='diskDevice'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>disk</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cdrom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>floppy</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>lun</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ide</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>fdc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>sata</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <graphics supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vnc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egl-headless</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <video supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='modelType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vga</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cirrus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>none</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>bochs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ramfb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </video>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hostdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='mode'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>subsystem</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='startupPolicy'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>mandatory</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>requisite</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>optional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='subsysType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pci</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='capsType'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='pciBackend'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hostdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <rng supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>random</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <filesystem supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='driverType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>path</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>handle</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtiofs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </filesystem>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tpm supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-tis</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-crb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emulator</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>external</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendVersion'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>2.0</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </tpm>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <redirdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </redirdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <channel supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </channel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <crypto supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </crypto>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <interface supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>passt</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <panic supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>isa</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>hyperv</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </panic>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <console supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>null</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dev</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pipe</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stdio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>udp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tcp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu-vdagent</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </console>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <gic supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <vmcoreinfo supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <genid supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backingStoreInput supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backup supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <async-teardown supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <s390-pv supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <ps2 supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tdx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sev supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sgx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hyperv supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='features'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>relaxed</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vapic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>spinlocks</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vpindex</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>runtime</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>synic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stimer</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reset</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vendor_id</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>frequencies</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reenlightenment</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tlbflush</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ipi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>avic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emsr_bitmap</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>xmm_input</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <spinlocks>4095</spinlocks>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <stimer_direct>on</stimer_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hyperv>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <launchSecurity supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]: </domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.360 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 02 11:29:27 compute-0 nova_compute[251290]: <domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <domain>kvm</domain>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <arch>x86_64</arch>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <vcpu max='4096'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <iothreads supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <os supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='firmware'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>efi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <loader supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>rom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pflash</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='readonly'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>yes</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='secure'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>yes</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>no</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </loader>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </os>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-passthrough' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='hostPassthroughMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='maximum' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='maximumMigratable'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>on</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>off</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='host-model' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <vendor>AMD</vendor>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='x2apic'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='hypervisor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='stibp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='overflow-recov'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='succor'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lbrv'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='tsc-scale'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='flushbyasid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pause-filter'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='pfthreshold'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <feature policy='disable' name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <mode name='custom' supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Broadwell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='ClearwaterForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ddpd-u'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sha512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm3'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sm4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Cooperlake-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Denverton-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Dhyana-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Milan-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Rome-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-Turin-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amd-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='auto-ibrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vp2intersect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fs-gs-base-ns'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibpb-brtype'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='no-nested-data-bp'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='null-sel-clr-base'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='perfmon-v2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbpb'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='srso-user-kernel-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='stibp-always-on'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='EPYC-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='GraniteRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-128'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-256'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx10-512'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='prefetchiti'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Haswell-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v6'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Icelake-Server-v7'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='IvyBridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='KnightsMill-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4fmaps'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-4vnniw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512er'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512pf'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G4-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Opteron_G5-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fma4'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tbm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xop'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SapphireRapids-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='amx-tile'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-bf16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-fp16'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512-vpopcntdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bitalg'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vbmi2'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrc'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fzrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='la57'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='taa-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='tsx-ldtrk'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='SierraForest-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ifma'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-ne-convert'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx-vnni-int8'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bhi-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='bus-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cmpccxadd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fbsdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='fsrs'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ibrs-all'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='intel-psfd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ipred-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='lam'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mcdt-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pbrsb-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='psdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rrsba-ctrl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='sbdr-ssdp-no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='serialize'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vaes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='vpclmulqdq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Client-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='hle'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='rtm'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Skylake-Server-v5'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512bw'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512cd'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512dq'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512f'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='avx512vl'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='invpcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pcid'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='pku'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='mpx'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v2'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v3'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='core-capability'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='split-lock-detect'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='Snowridge-v4'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='cldemote'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='erms'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='gfni'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdir64b'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='movdiri'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='xsaves'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='athlon-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='core2duo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='coreduo-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='n270-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='ss'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <blockers model='phenom-v1'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnow'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <feature name='3dnowext'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </blockers>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </mode>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <memoryBacking supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <enum name='sourceType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>anonymous</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <value>memfd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </memoryBacking>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <disk supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='diskDevice'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>disk</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cdrom</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>floppy</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>lun</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>fdc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>sata</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <graphics supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vnc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egl-headless</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <video supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='modelType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vga</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>cirrus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>none</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>bochs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ramfb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </video>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hostdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='mode'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>subsystem</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='startupPolicy'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>mandatory</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>requisite</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>optional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='subsysType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pci</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>scsi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='capsType'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='pciBackend'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hostdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <rng supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtio-non-transitional</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>random</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>egd</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <filesystem supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='driverType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>path</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>handle</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>virtiofs</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </filesystem>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tpm supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-tis</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tpm-crb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emulator</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>external</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendVersion'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>2.0</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </tpm>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <redirdev supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='bus'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>usb</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </redirdev>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <channel supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </channel>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <crypto supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendModel'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>builtin</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </crypto>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <interface supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='backendType'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>default</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>passt</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <panic supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='model'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>isa</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>hyperv</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </panic>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <console supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='type'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>null</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vc</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pty</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dev</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>file</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>pipe</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stdio</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>udp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tcp</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>unix</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>qemu-vdagent</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>dbus</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </console>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   <features>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <gic supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <vmcoreinfo supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <genid supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backingStoreInput supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <backup supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <async-teardown supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <s390-pv supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <ps2 supported='yes'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <tdx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sev supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <sgx supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <hyperv supported='yes'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <enum name='features'>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>relaxed</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vapic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>spinlocks</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vpindex</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>runtime</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>synic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>stimer</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reset</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>vendor_id</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>frequencies</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>reenlightenment</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>tlbflush</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>ipi</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>avic</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>emsr_bitmap</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <value>xmm_input</value>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </enum>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       <defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <spinlocks>4095</spinlocks>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <stimer_direct>on</stimer_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 11:29:27 compute-0 nova_compute[251290]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 11:29:27 compute-0 nova_compute[251290]:       </defaults>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     </hyperv>
Feb 02 11:29:27 compute-0 nova_compute[251290]:     <launchSecurity supported='no'/>
Feb 02 11:29:27 compute-0 nova_compute[251290]:   </features>
Feb 02 11:29:27 compute-0 nova_compute[251290]: </domainCapabilities>
Feb 02 11:29:27 compute-0 nova_compute[251290]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
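
[editor's note] The block ending above is the libvirt domainCapabilities document that nova-compute fetches at startup to learn what the hypervisor can do: usable/deprecated CPU models, memory backing sources, supported device models, TPM backends, and launch-security features. A minimal sketch of the same query with libvirt-python, assuming a local qemu:///system socket is reachable:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open("qemu:///system")
    # None for emulator/machine lets libvirt pick defaults for this arch/virttype.
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)

    # CPU models as in the <mode name='custom'> block, with usable/deprecated flags.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, model.get("usable"), model.get("deprecated"))

    # Memory backing sources, as in <memoryBacking>/<enum name='sourceType'>.
    for value in root.findall("./memoryBacking/enum[@name='sourceType']/value"):
        print("sourceType:", value.text)
    conn.close()
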
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.436 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.443 251294 INFO nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Secure Boot support detected
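
[editor's note] The secure-boot probe logged above inspects the same domainCapabilities document. A rough approximation of that test, assuming libvirt's standard <os>/<loader> layout; nova's real supports_secure_boot additionally filters by machine type and firmware descriptors:

    import xml.etree.ElementTree as ET

    def supports_secure_boot(caps_xml: str) -> bool:
        root = ET.fromstring(caps_xml)
        os_elem = root.find("os")
        if os_elem is None or os_elem.get("supported") != "yes":
            return False
        # libvirt advertises secure boot as <loader><enum name='secure'><value>yes</value>
        values = os_elem.findall("./loader/enum[@name='secure']/value")
        return any(v.text == "yes" for v in values)
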
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.445 251294 INFO nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.613 251294 DEBUG nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.642 251294 INFO nova.virt.node [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Determined node identity 92919e7b-7846-4645-9401-9fd55bbbf435 from /var/lib/nova/compute_id
Feb 02 11:29:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:27 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.696 251294 WARNING nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Compute nodes ['92919e7b-7846-4645-9401-9fd55bbbf435'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.740 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.800 251294 WARNING nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.801 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.801 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.802 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
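
[editor's note] The Acquiring/acquired/released triplet around clean_compute_node_cache above is oslo.concurrency's lock decorator doing the logging, including the waited/held timings. A minimal sketch of the same pattern, with an illustrative function body rather than nova's actual code (nova takes this lock with fair=True):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources", fair=True)
    def clean_compute_node_cache():
        # Runs with the in-process "compute_resources" lock held; the
        # Acquiring/acquired/released DEBUG lines are emitted by the
        # decorator's wrapper, not by the function itself.
        pass
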
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.802 251294 DEBUG nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:29:27 compute-0 nova_compute[251290]: 2026-02-02 11:29:27.803 251294 DEBUG oslo_concurrency.processutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:29:27 compute-0 lvm[252218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:29:27 compute-0 lvm[252218]: VG ceph_vg0 finished
Feb 02 11:29:27 compute-0 optimistic_sutherland[252139]: {}
Feb 02 11:29:28 compute-0 systemd[1]: libpod-33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451.scope: Deactivated successfully.
Feb 02 11:29:28 compute-0 systemd[1]: libpod-33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451.scope: Consumed 1.153s CPU time.
Feb 02 11:29:28 compute-0 podman[252123]: 2026-02-02 11:29:28.007384791 +0000 UTC m=+1.010923139 container died 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:29:28 compute-0 podman[252240]: 2026-02-02 11:29:28.028822846 +0000 UTC m=+0.073960168 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 02 11:29:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e2841e721dfce78e546b54fe8edc2c21e97b64fceb47fa8e2f4bbe9d51ead45-merged.mount: Deactivated successfully.
Feb 02 11:29:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:28 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:28 compute-0 podman[252123]: 2026-02-02 11:29:28.063798697 +0000 UTC m=+1.067337035 container remove 33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:29:28 compute-0 podman[252241]: 2026-02-02 11:29:28.065917179 +0000 UTC m=+0.111040511 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:29:28 compute-0 systemd[1]: libpod-conmon-33842d42cd3dcb4df94bc1151af0e5954fac20118d3c5f57d276ef45cce2c451.scope: Deactivated successfully.
Feb 02 11:29:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:28 compute-0 sudo[252002]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:29:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:29:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:28 compute-0 sudo[252292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:29:28 compute-0 sudo[252292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:28 compute-0 sudo[252292]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:29:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508722354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2222598403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:28 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:28 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:29:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2508722354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.311 251294 DEBUG oslo_concurrency.processutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
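
[editor's note] The ceph df round trip above (issued at 11:29:27.803, returned rc=0 half a second later) is how nova's RBD image backend measures pool capacity for the resource tracker. The same probe by hand, parsing the JSON ceph emits; key names follow ceph's documented df schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
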
Feb 02 11:29:28 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 02 11:29:28 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 02 11:29:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:28.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:28.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
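
[editor's note] The anonymous "HEAD / HTTP/1.0" pairs that recur every two seconds from 192.168.122.100 and .102 have the shape of external liveness probes against the RGW beast frontend. The same check by hand; the hostname and port below are illustrative assumptions, since the log does not show which endpoint the probes hit:

    import http.client

    # host/port are assumptions for illustration, not taken from the log
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200, as in the beast access lines above
    conn.close()
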
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.631 251294 WARNING nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.632 251294 DEBUG nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.633 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.633 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.649 251294 WARNING nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] No compute node record for compute-0.ctlplane.example.com:92919e7b-7846-4645-9401-9fd55bbbf435: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 92919e7b-7846-4645-9401-9fd55bbbf435 could not be found.
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.666 251294 INFO nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 92919e7b-7846-4645-9401-9fd55bbbf435
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.720 251294 DEBUG nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.720 251294 DEBUG nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.890 251294 INFO nova.scheduler.client.report [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [req-d8e9e0ee-786c-4586-8a13-cca40bfa4faf] Created resource provider record via placement API for resource provider with UUID 92919e7b-7846-4645-9401-9fd55bbbf435 and name compute-0.ctlplane.example.com.
Feb 02 11:29:28 compute-0 nova_compute[251290]: 2026-02-02 11:29:28.974 251294 DEBUG oslo_concurrency.processutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:29:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:29 compute-0 ceph-mon[74676]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/259762945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:29:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768398506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.493 251294 DEBUG oslo_concurrency.processutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.498 251294 DEBUG nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 02 11:29:29 compute-0 nova_compute[251290]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.498 251294 INFO nova.virt.libvirt.host [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] kernel doesn't support AMD SEV
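
[editor's note] The two DEBUG lines above show the raw input behind the "kernel doesn't support AMD SEV" conclusion: /sys/module/kvm_amd/parameters/sev contains "N". A minimal re-implementation of that check (this kernel reports Y/N; some builds use 1/0):

    def kernel_supports_amd_sev(path="/sys/module/kvm_amd/parameters/sev"):
        try:
            with open(path) as f:
                return f.read().strip().lower() in ("y", "1")
        except FileNotFoundError:
            # kvm_amd module not loaded at all
            return False
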
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.499 251294 DEBUG nova.compute.provider_tree [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.499 251294 DEBUG nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:29:29
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', '.nfs', 'volumes']
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.548 251294 DEBUG nova.scheduler.client.report [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Updated inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.551 251294 DEBUG nova.compute.provider_tree [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Updating resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.551 251294 DEBUG nova.compute.provider_tree [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
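
[editor's note] The inventory pushed to Placement above determines schedulable capacity: for each resource class, usable = (total - reserved) * allocation_ratio. Plugging in the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1: what the scheduler can place here.
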
Feb 02 11:29:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:29:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.668 251294 DEBUG nova.compute.provider_tree [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Updating resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 11:29:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:29 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.695 251294 DEBUG nova.compute.resource_tracker [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.695 251294 DEBUG oslo_concurrency.lockutils [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.696 251294 DEBUG nova.service [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.779 251294 DEBUG nova.service [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 02 11:29:29 compute-0 nova_compute[251290]: 2026-02-02 11:29:29.780 251294 DEBUG nova.servicegroup.drivers.db [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
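
[editor's note] The pg_autoscaler lines above fit a simple formula: pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds), quantized to a power of two no lower than the pool's pg_num_min. The logged values are consistent with mon_target_pg_per_osd=100, 3 OSDs (a factor of 300), and pg_num_min=16 for the CephFS metadata pool; treat those three numbers as assumptions read back from the output rather than confirmed settings. A sketch that reproduces the '.mgr' and 'cephfs.cephfs.meta' lines:

    def pg_target(usage_ratio, bias, target_per_osd=100, num_osds=3, pg_num_min=1):
        # Raw target, as in the "pg target ..." fields above.
        raw = usage_ratio * bias * target_per_osd * num_osds
        # Quantize upward to a power of two, respecting the pool minimum.
        n = max(pg_num_min, 1)
        while n < raw:
            n *= 2
        return raw, n

    print(pg_target(7.185749983720779e-06, 1.0))                 # (0.002155..., 1)
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # (0.000610..., 16)

Why the empty pools stay at 32 is not modeled here: the real autoscaler applies extra guards (a 3x change threshold, bulk flags) before touching pg_num.
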
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:29:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:29:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/112930 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:29:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:30 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/768398506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:29:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:30.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:31 compute-0 ceph-mon[74676]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:31 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:32 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:32 compute-0 ceph-mon[74676]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:29:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:32.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:33 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:34 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Feb 02 11:29:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:34.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:34.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:35 compute-0 ceph-mon[74676]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Feb 02 11:29:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:35 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04100023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:36 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 767 B/s wr, 2 op/s
Feb 02 11:29:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:36] "GET /metrics HTTP/1.1" 200 48280 "" "Prometheus/2.51.0"
Feb 02 11:29:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:36] "GET /metrics HTTP/1.1" 200 48280 "" "Prometheus/2.51.0"
Feb 02 11:29:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:37.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:29:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:37.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:29:37 compute-0 ceph-mon[74676]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 767 B/s wr, 2 op/s
Feb 02 11:29:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:37 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:38 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:38.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:38.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:39 compute-0 ceph-mon[74676]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:39 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:40 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:40.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:41 compute-0 ceph-mon[74676]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:41 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:42 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:42.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:42.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:43 compute-0 ceph-mon[74676]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:29:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:43 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:44 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:44 compute-0 sudo[252382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:29:44 compute-0 sudo[252382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:29:44 compute-0 sudo[252382]: pam_unix(sudo:session): session closed for user root
Feb 02 11:29:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:44.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:29:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:45 compute-0 ceph-mon[74676]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:45 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:46 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:46.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:46.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:47.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:29:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:47 compute-0 ceph-mon[74676]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:47 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:48 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:48.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:48.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:49 compute-0 ceph-mon[74676]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:49 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:49 compute-0 nova_compute[251290]: 2026-02-02 11:29:49.783 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:29:49 compute-0 nova_compute[251290]: 2026-02-02 11:29:49.837 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:29:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:50 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:29:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:50.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:29:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:51 compute-0 ceph-mon[74676]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:51 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:52 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:29:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:29:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:29:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:53 compute-0 ceph-mon[74676]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:29:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:53 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:54 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:54.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:55 compute-0 ceph-mon[74676]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:55 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:56 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:29:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:56.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:29:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:29:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:29:57.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:29:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:57 compute-0 ceph-mon[74676]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:29:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2226577410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:29:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2226577410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:29:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2307394015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:29:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2307394015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:29:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:57 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:58 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:58 compute-0 podman[252421]: 2026-02-02 11:29:58.2996154 +0000 UTC m=+0.086150373 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 11:29:58 compute-0 podman[252422]: 2026-02-02 11:29:58.300576699 +0000 UTC m=+0.087275927 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 02 11:29:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3968203647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:29:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3968203647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:29:58 compute-0 ceph-mon[74676]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:29:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:29:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:29:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:29:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:29:58.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:29:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:29:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:29:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:29:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:29:59 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0404004ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:29:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb 02 11:30:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:00 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:30:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4167 writes, 18K keys, 4167 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4167 writes, 4167 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1455 writes, 5942 keys, 1455 commit groups, 1.0 writes per commit group, ingest: 10.84 MB, 0.02 MB/s
                                           Interval WAL: 1455 writes, 1455 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    124.7      0.24              0.06         8    0.030       0      0       0.0       0.0
                                             L6      1/0   11.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    142.2    118.0      0.75              0.16         7    0.108     32K   3816       0.0       0.0
                                            Sum      1/0   11.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    108.2    119.6      0.99              0.22        15    0.066     32K   3816       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    102.0     97.0      0.48              0.09         6    0.080     16K   2033       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.2    118.0      0.75              0.16         7    0.108     32K   3816       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    126.3      0.23              0.06         7    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594e304b350#2 capacity: 304.00 MB usage: 5.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000122 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(330,4.99 MB,1.64094%) FilterBlock(16,99.55 KB,0.0319782%) IndexBlock(16,195.33 KB,0.0627468%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 11:30:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:00.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:00 compute-0 ceph-mon[74676]: overall HEALTH_OK
Feb 02 11:30:00 compute-0 ceph-mon[74676]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.412063) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801412109, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1208, "num_deletes": 255, "total_data_size": 2207707, "memory_usage": 2246152, "flush_reason": "Manual Compaction"}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801434190, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2148704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17819, "largest_seqno": 19025, "table_properties": {"data_size": 2142979, "index_size": 3052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11820, "raw_average_key_size": 18, "raw_value_size": 2131492, "raw_average_value_size": 3404, "num_data_blocks": 137, "num_entries": 626, "num_filter_entries": 626, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031682, "oldest_key_time": 1770031682, "file_creation_time": 1770031801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 22234 microseconds, and 6394 cpu microseconds.
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.434286) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2148704 bytes OK
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.434371) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.436065) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.436095) EVENT_LOG_v1 {"time_micros": 1770031801436088, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.436126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2202272, prev total WAL file size 2202272, number of live WAL files 2.
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.436882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2098KB)], [38(11MB)]
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801437710, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14019457, "oldest_snapshot_seqno": -1}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4979 keys, 13515557 bytes, temperature: kUnknown
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801586553, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13515557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13481008, "index_size": 20999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126531, "raw_average_key_size": 25, "raw_value_size": 13389433, "raw_average_value_size": 2689, "num_data_blocks": 863, "num_entries": 4979, "num_filter_entries": 4979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.587375) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13515557 bytes
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.589807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.0 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.3 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(12.8) write-amplify(6.3) OK, records in: 5503, records dropped: 524 output_compression: NoCompression
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.589864) EVENT_LOG_v1 {"time_micros": 1770031801589843, "job": 18, "event": "compaction_finished", "compaction_time_micros": 149115, "compaction_time_cpu_micros": 26924, "output_level": 6, "num_output_files": 1, "total_output_size": 13515557, "num_input_records": 5503, "num_output_records": 4979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801590562, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031801592929, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.436707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.593457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.593466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.593468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.593470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:30:01.593472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:30:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:01 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420003460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:02 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:02 compute-0 ceph-mon[74676]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:03 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:04 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:04 compute-0 sudo[252475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:30:04 compute-0 sudo[252475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:04 compute-0 sudo[252475]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:04.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:05 compute-0 ceph-mon[74676]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:05 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:06 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:06] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:30:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:06] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:30:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:07.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:30:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:07.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:30:07 compute-0 ceph-mon[74676]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:07 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:08 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:08.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0428001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:09 compute-0 ceph-mon[74676]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:09 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:10 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:10.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:10.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:11 compute-0 ceph-mon[74676]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:11 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:12 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:12.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:13 compute-0 ceph-mon[74676]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:13 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800af20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:14 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:30:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:30:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:30:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:15 compute-0 ceph-mon[74676]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:15 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:16 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800af20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:30:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:30:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:30:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f041000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:17 compute-0 ceph-mon[74676]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:17 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:18 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:18.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:18.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:19 compute-0 kernel: ganesha.nfsd[252368]: segfault at 50 ip 00007f04b34b732e sp 00007f04397f9210 error 4 in libntirpc.so.5.8[7f04b349c000+2c000] likely on CPU 3 (core 0, socket 3)
Feb 02 11:30:19 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:30:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[208236]: 02/02/2026 11:30:19 : epoch 6980899c : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f042800af20 fd 48 proxy ignored for local
Feb 02 11:30:19 compute-0 systemd[1]: Started Process Core Dump (PID 252515/UID 0).
Feb 02 11:30:19 compute-0 ceph-mon[74676]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:20 compute-0 systemd-coredump[252516]: Process 208241 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 71:
                                                    #0  0x00007f04b34b732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:30:20 compute-0 systemd[1]: systemd-coredump@4-252515-0.service: Deactivated successfully.
Feb 02 11:30:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:30:20 compute-0 podman[252523]: 2026-02-02 11:30:20.32128419 +0000 UTC m=+0.030265674 container died 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7c59c2e9a39a789efb873eaa53c6595448f35922aec8206d254cc1c2241308-merged.mount: Deactivated successfully.
Feb 02 11:30:20 compute-0 ceph-mon[74676]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:20 compute-0 podman[252523]: 2026-02-02 11:30:20.547029055 +0000 UTC m=+0.256010509 container remove 09770d2aca4e2956b43b09f6ef9373e46a77f2ba1e7a0de8aca5e3173880959b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:30:20 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:30:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:20 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:30:20 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.728s CPU time.
Feb 02 11:30:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:30:22.663 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:30:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:30:22.664 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:30:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:30:22.664 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:30:23 compute-0 ceph-mon[74676]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:30:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:24 compute-0 sudo[252569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:30:24 compute-0 sudo[252569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:24 compute-0 sudo[252569]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:24.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.023 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.024 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.024 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.024 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.115 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.116 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.116 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.116 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.117 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.117 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.117 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.117 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.118 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.208 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.209 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.209 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.209 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.210 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:30:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113025 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:30:25 compute-0 ceph-mon[74676]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2408707697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:30:25 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174035450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.746 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.968 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.969 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.970 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:30:25 compute-0 nova_compute[251290]: 2026-02-02 11:30:25.970 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:30:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.141 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.142 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.187 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:30:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4174035450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:26 compute-0 ceph-mon[74676]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:30:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3103360043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:30:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564822976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.708 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.715 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.795 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.797 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:30:26 compute-0 nova_compute[251290]: 2026-02-02 11:30:26.798 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:30:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:30:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:27] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:30:27 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:27] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:30:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3564822976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1174913965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:28 compute-0 sudo[252642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:30:28 compute-0 sudo[252642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:28 compute-0 sudo[252642]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:28 compute-0 sudo[252679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:30:28 compute-0 sudo[252679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:28 compute-0 ceph-mon[74676]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:28 compute-0 podman[252666]: 2026-02-02 11:30:28.593282609 +0000 UTC m=+0.099499643 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 11:30:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:28 compute-0 podman[252667]: 2026-02-02 11:30:28.619963097 +0000 UTC m=+0.124107161 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 02 11:30:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:29 compute-0 sudo[252679]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:30:29 compute-0 sudo[252762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:30:29 compute-0 sudo[252762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:29 compute-0 sudo[252762]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:29 compute-0 sudo[252787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:30:29 compute-0 sudo[252787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:30:29
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', '.rgw.root', 'images', 'volumes', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/202988901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:30:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.676011592 +0000 UTC m=+0.046754395 container create 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:30:29 compute-0 systemd[1]: Started libpod-conmon-2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae.scope.
Feb 02 11:30:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.654805153 +0000 UTC m=+0.025547986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.767376437 +0000 UTC m=+0.138119260 container init 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.776584695 +0000 UTC m=+0.147327498 container start 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.781707485 +0000 UTC m=+0.152450328 container attach 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:30:29 compute-0 lucid_nobel[252869]: 167 167
Feb 02 11:30:29 compute-0 systemd[1]: libpod-2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae.scope: Deactivated successfully.
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.786936447 +0000 UTC m=+0.157679250 container died 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
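
[editor's note] The pg_autoscaler lines above all apply one rule: a pool's raw PG target is its share of raw capacity, times its bias, times an overall PG budget, and the result is then quantized to a power of two with a per-pool floor (1 for '.mgr', 16 or 32 for the others, per the "quantized to" values). A minimal sketch that reproduces the logged numbers, assuming a budget of 300 PGs (e.g. mon_target_pg_per_osd=100 across 3 OSDs; the budget is inferred from the outputs and is not shown in this log):

    # Sketch of the pg_autoscaler arithmetic visible in the log above.
    # ASSUMPTION: a total budget of 300 PGs (e.g. mon_target_pg_per_osd=100
    # on 3 OSDs); the log shows only the per-pool inputs and outputs.
    def raw_pg_target(capacity_ratio: float, bias: float, budget: int = 300) -> float:
        return capacity_ratio * bias * budget

    # capacity ratios and biases copied verbatim from the log lines above:
    print(raw_pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951162337, as logged for '.mgr'
    print(raw_pg_target(5.087256625643029e-07, 4.0))   # ~6.1047e-04, as logged for 'cephfs.cephfs.meta'
    print(raw_pg_target(3.8154424692322717e-07, 1.0))  # ~1.1446e-04, as logged for '.rgw.root'

[/editor's note]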
Feb 02 11:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5c2982f16f65b5e1f91a15af9bbd14005c350829194ab71f6a975dfb777050-merged.mount: Deactivated successfully.
Feb 02 11:30:29 compute-0 podman[252853]: 2026-02-02 11:30:29.839702636 +0000 UTC m=+0.210445439 container remove 2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:29 compute-0 systemd[1]: libpod-conmon-2f8bbcab6710846d99d8d8d602d22d103be7d3de16a6df3d53f2847b1f3986ae.scope: Deactivated successfully.
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:30:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:30:29 compute-0 podman[252892]: 2026-02-02 11:30:29.99100507 +0000 UTC m=+0.057453487 container create f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:30 compute-0 systemd[1]: Started libpod-conmon-f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376.scope.
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:29.961103108 +0000 UTC m=+0.027551545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:30.100612377 +0000 UTC m=+0.167060824 container init f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:30.107131287 +0000 UTC m=+0.173579704 container start f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:30.111537116 +0000 UTC m=+0.177985563 container attach f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:30:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:30 compute-0 frosty_pascal[252908]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:30:30 compute-0 frosty_pascal[252908]: --> All data devices are unavailable
Feb 02 11:30:30 compute-0 systemd[1]: libpod-f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376.scope: Deactivated successfully.
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:30.459363071 +0000 UTC m=+0.525811498 container died f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-450ae4d9683b7bffce391d23593d4c5136cc23ba3cebcb9e4d9b3f1fc63837ec-merged.mount: Deactivated successfully.
Feb 02 11:30:30 compute-0 podman[252892]: 2026-02-02 11:30:30.501662585 +0000 UTC m=+0.568111002 container remove f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:30 compute-0 systemd[1]: libpod-conmon-f9814ba414aa8e8981adb3c48cb2874d863059d6f04b3ac6650667fe8db9f376.scope: Deactivated successfully.
Feb 02 11:30:30 compute-0 sudo[252787]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:30 compute-0 ceph-mon[74676]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
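
[editor's note] The anonymous "HEAD / HTTP/1.0" requests that radosgw logs every couple of seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes (a plausible reading; the log itself does not say who sends them). A minimal reproduction of such a probe, assuming the beast frontend listens on compute-0 (192.168.122.100) port 8080; neither the bind address nor the port is visible in these lines:

    # Reproduce an anonymous HEAD probe like those in the access log above.
    # ASSUMPTIONS: target host and port; http.client also speaks HTTP/1.1
    # rather than the logged HTTP/1.0.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")          # unauthenticated, no body
    print(conn.getresponse().status)   # expect 200, matching the log
    conn.close()

[/editor's note]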
Feb 02 11:30:30 compute-0 sudo[252934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:30:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:30 compute-0 sudo[252934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:30 compute-0 sudo[252934]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:30 compute-0 sudo[252959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:30:30 compute-0 sudo[252959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:30 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 5.
Feb 02 11:30:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:30:30 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.728s CPU time.
Feb 02 11:30:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
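
[editor's note] "Restart counter is at 5" means systemd has now auto-restarted this ganesha unit five times under its Restart= policy; the same counter can be read back from the unit's NRestarts property. A hedged sketch (the unit name is copied from the log lines above):

    # Read the auto-restart counter for the repeatedly restarting nfs unit.
    # systemd exposes it as the NRestarts service property.
    import subprocess

    unit = ("ceph-1d33f80b-d6ca-501c-bac7-184379b89279"
            "@nfs.cephfs.2.0.compute-0.lrvhze.service")
    out = subprocess.run(["systemctl", "show", unit, "-p", "NRestarts"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "NRestarts=5"

[/editor's note]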
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.089753739 +0000 UTC m=+0.043955473 container create 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:31 compute-0 systemd[1]: Started libpod-conmon-89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3.scope.
Feb 02 11:30:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.072452364 +0000 UTC m=+0.026654118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.185178563 +0000 UTC m=+0.139380317 container init 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.193033272 +0000 UTC m=+0.147235016 container start 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:30:31 compute-0 podman[253084]: 2026-02-02 11:30:31.192861137 +0000 UTC m=+0.055687636 container create d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.197188053 +0000 UTC m=+0.151389777 container attach 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Feb 02 11:30:31 compute-0 jovial_payne[253090]: 167 167
Feb 02 11:30:31 compute-0 systemd[1]: libpod-89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3.scope: Deactivated successfully.
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.200724106 +0000 UTC m=+0.154925850 container died 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc10799ffaf0a7a99331d906b125e638380ad5b3a791d17ec71fc70b9ffbef5-merged.mount: Deactivated successfully.
Feb 02 11:30:31 compute-0 podman[253040]: 2026-02-02 11:30:31.251520188 +0000 UTC m=+0.205721922 container remove 89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_payne, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:30:31 compute-0 podman[253084]: 2026-02-02 11:30:31.163065308 +0000 UTC m=+0.025891807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:31 compute-0 systemd[1]: libpod-conmon-89571e28738504d94bb9a81b42db8b3b4faa6504bc9f97763ba4e85e015399c3.scope: Deactivated successfully.
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3c62b9d66f2c8d80744924ba1bb7f3493e817614fe979b5144af36e74c1fa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3c62b9d66f2c8d80744924ba1bb7f3493e817614fe979b5144af36e74c1fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3c62b9d66f2c8d80744924ba1bb7f3493e817614fe979b5144af36e74c1fa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3c62b9d66f2c8d80744924ba1bb7f3493e817614fe979b5144af36e74c1fa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 podman[253084]: 2026-02-02 11:30:31.28655207 +0000 UTC m=+0.149378649 container init d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:30:31 compute-0 podman[253084]: 2026-02-02 11:30:31.290992349 +0000 UTC m=+0.153818848 container start d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:31 compute-0 bash[253084]: d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:30:31 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.411231026 +0000 UTC m=+0.049024861 container create 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:31 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:30:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:31 compute-0 systemd[1]: Started libpod-conmon-7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84.scope.
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.38803351 +0000 UTC m=+0.025827375 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca6c11bcb1228b542c26172b1c761c931336c0342f183781f0b0d32cf9bbb73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca6c11bcb1228b542c26172b1c761c931336c0342f183781f0b0d32cf9bbb73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca6c11bcb1228b542c26172b1c761c931336c0342f183781f0b0d32cf9bbb73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca6c11bcb1228b542c26172b1c761c931336c0342f183781f0b0d32cf9bbb73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.500915232 +0000 UTC m=+0.138709077 container init 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.508400641 +0000 UTC m=+0.146194476 container start 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.512928893 +0000 UTC m=+0.150722758 container attach 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]: {
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:     "1": [
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:         {
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "devices": [
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "/dev/loop3"
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             ],
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "lv_name": "ceph_lv0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "lv_size": "21470642176",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "name": "ceph_lv0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "tags": {
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.cluster_name": "ceph",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.crush_device_class": "",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.encrypted": "0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.osd_id": "1",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.type": "block",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.vdo": "0",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:                 "ceph.with_tpm": "0"
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             },
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "type": "block",
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:             "vg_name": "ceph_vg0"
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:         }
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]:     ]
Feb 02 11:30:31 compute-0 pedantic_swanson[253180]: }
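
[editor's note] The JSON that pedantic_swanson prints above is the result of the `ceph-volume ... lvm list --format json` call dispatched via sudo a few lines earlier: a map from OSD id to the logical volumes backing it. A small sketch of how a script might walk it, with every field name taken directly from the output above (assume the JSON has been captured to lvm_list.json):

    # Walk the `ceph-volume lvm list --format json` output shown above.
    # ASSUMPTION: the JSON was saved to lvm_list.json; all keys used below
    # appear verbatim in the logged output.
    import json

    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']}, "
                  f"devices {', '.join(lv['devices'])})")
    # -> osd.1: block on /dev/ceph_vg0/ceph_lv0 (osd_fsid 1ce0bc48-..., devices /dev/loop3)

[/editor's note]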
Feb 02 11:30:31 compute-0 systemd[1]: libpod-7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84.scope: Deactivated successfully.
Feb 02 11:30:31 compute-0 conmon[253180]: conmon 7938f994ebdf4f1c7f26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84.scope/container/memory.events
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.847468391 +0000 UTC m=+0.485262226 container died 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ca6c11bcb1228b542c26172b1c761c931336c0342f183781f0b0d32cf9bbb73-merged.mount: Deactivated successfully.
Feb 02 11:30:31 compute-0 podman[253142]: 2026-02-02 11:30:31.903853196 +0000 UTC m=+0.541647031 container remove 7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:31 compute-0 systemd[1]: libpod-conmon-7938f994ebdf4f1c7f2686737fb82cc9b9fbd80f953fbaccdcfff904815f2e84.scope: Deactivated successfully.
Feb 02 11:30:31 compute-0 sudo[252959]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:32 compute-0 sudo[253204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:30:32 compute-0 sudo[253204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:32 compute-0 sudo[253204]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:32 compute-0 sudo[253229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:30:32 compute-0 sudo[253229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.497004808 +0000 UTC m=+0.043593673 container create 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:30:32 compute-0 systemd[1]: Started libpod-conmon-692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab.scope.
Feb 02 11:30:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.476729267 +0000 UTC m=+0.023318152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.574662643 +0000 UTC m=+0.121251538 container init 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.580940726 +0000 UTC m=+0.127529601 container start 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.584634064 +0000 UTC m=+0.131222959 container attach 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:30:32 compute-0 hopeful_nightingale[253310]: 167 167
Feb 02 11:30:32 compute-0 systemd[1]: libpod-692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab.scope: Deactivated successfully.
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.587613371 +0000 UTC m=+0.134202226 container died 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:30:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-61eb0f87c9bdcde37c5808c9c7991f2449e5923455a130503c597c9a651bfefc-merged.mount: Deactivated successfully.
Feb 02 11:30:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:32.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:32 compute-0 podman[253294]: 2026-02-02 11:30:32.625694442 +0000 UTC m=+0.172283307 container remove 692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:30:32 compute-0 systemd[1]: libpod-conmon-692cc8d977f2dc4f6e3eb9c4a1f5b701a2e269daebbd2cefc0ef8a3957e9dfab.scope: Deactivated successfully.
Feb 02 11:30:32 compute-0 podman[253335]: 2026-02-02 11:30:32.810885934 +0000 UTC m=+0.081020265 container create 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:30:32 compute-0 systemd[1]: Started libpod-conmon-55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e.scope.
Feb 02 11:30:32 compute-0 podman[253335]: 2026-02-02 11:30:32.757021163 +0000 UTC m=+0.027155484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:30:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e71ec6c218f74883727f3276d3ae6a45cf6cc444c4d89a9770136daf7b58b73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e71ec6c218f74883727f3276d3ae6a45cf6cc444c4d89a9770136daf7b58b73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e71ec6c218f74883727f3276d3ae6a45cf6cc444c4d89a9770136daf7b58b73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e71ec6c218f74883727f3276d3ae6a45cf6cc444c4d89a9770136daf7b58b73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:30:32 compute-0 podman[253335]: 2026-02-02 11:30:32.895885593 +0000 UTC m=+0.166019924 container init 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:30:32 compute-0 podman[253335]: 2026-02-02 11:30:32.901913579 +0000 UTC m=+0.172047900 container start 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:30:32 compute-0 podman[253335]: 2026-02-02 11:30:32.904793003 +0000 UTC m=+0.174927354 container attach 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:30:33 compute-0 ceph-mon[74676]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:30:33 compute-0 lvm[253427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:30:33 compute-0 lvm[253427]: VG ceph_vg0 finished
Feb 02 11:30:33 compute-0 kind_bartik[253352]: {}
Feb 02 11:30:33 compute-0 systemd[1]: libpod-55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e.scope: Deactivated successfully.
Feb 02 11:30:33 compute-0 systemd[1]: libpod-55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e.scope: Consumed 1.118s CPU time.
Feb 02 11:30:33 compute-0 podman[253335]: 2026-02-02 11:30:33.654388869 +0000 UTC m=+0.924523210 container died 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:30:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e71ec6c218f74883727f3276d3ae6a45cf6cc444c4d89a9770136daf7b58b73-merged.mount: Deactivated successfully.
Feb 02 11:30:33 compute-0 podman[253335]: 2026-02-02 11:30:33.698844155 +0000 UTC m=+0.968978476 container remove 55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:30:33 compute-0 systemd[1]: libpod-conmon-55ceedcf0feabed9765a14d3308138a7f87a24a7d692eca271a50fd50740614e.scope: Deactivated successfully.
Feb 02 11:30:33 compute-0 sudo[253229]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:30:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:30:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:33 compute-0 sudo[253445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:30:33 compute-0 sudo[253445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:33 compute-0 sudo[253445]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113034 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:30:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:30:34 compute-0 ceph-mon[74676]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:30:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:36.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Feb 02 11:30:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:37.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:30:37 compute-0 ceph-mon[74676]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:37 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:30:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:37 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:30:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:37 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:30:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:38.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:30:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:38 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:38.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:39 compute-0 ceph-mon[74676]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:39 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:30:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:39 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:30:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:39 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:30:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:30:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:30:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:41 compute-0 ceph-mon[74676]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:30:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:30:42 compute-0 ceph-mon[74676]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:30:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:42.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:30:44 compute-0 sudo[253480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:30:44 compute-0 sudo[253480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:30:44 compute-0 sudo[253480]: pam_unix(sudo:session): session closed for user root
Feb 02 11:30:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:30:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:30:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:44 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.003000087s ======
Feb 02 11:30:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:44.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Feb 02 11:30:45 compute-0 ceph-mon[74676]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:30:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:30:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:45 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7040000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:46 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7038001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Feb 02 11:30:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:30:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:46 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:47.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:30:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:47 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7028000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:47 compute-0 ceph-mon[74676]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Feb 02 11:30:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:47 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f702c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:48 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7038001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Feb 02 11:30:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:48 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:30:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:48 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:30:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:48.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113049 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:30:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:49 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7044001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:30:49 compute-0 ceph-mon[74676]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Feb 02 11:30:49 compute-0 kernel: ganesha.nfsd[253522]: segfault at 50 ip 00007f70d1d6b32e sp 00007f705e7fb210 error 4 in libntirpc.so.5.8[7f70d1d50000+2c000] likely on CPU 2 (core 0, socket 2)
Feb 02 11:30:49 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:30:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253115]: 02/02/2026 11:30:49 : epoch 69808ad7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70280016a0 fd 39 proxy ignored for local
Feb 02 11:30:49 compute-0 systemd[1]: Started Process Core Dump (PID 253527/UID 0).
Feb 02 11:30:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Feb 02 11:30:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:50.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:50 compute-0 systemd-coredump[253528]: Process 253121 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f70d1d6b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:30:50 compute-0 systemd[1]: systemd-coredump@5-253527-0.service: Deactivated successfully.
Feb 02 11:30:50 compute-0 podman[253534]: 2026-02-02 11:30:50.903806923 +0000 UTC m=+0.028899204 container died d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-edf3c62b9d66f2c8d80744924ba1bb7f3493e817614fe979b5144af36e74c1fa-merged.mount: Deactivated successfully.
Feb 02 11:30:50 compute-0 podman[253534]: 2026-02-02 11:30:50.94928753 +0000 UTC m=+0.074379801 container remove d3736d9117cfbda266d3d17795e913bb2a0a3e179a7ee56c2f1a13bad0587f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:30:50 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:30:51 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:30:51 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.029s CPU time.
Feb 02 11:30:51 compute-0 ceph-mon[74676]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Feb 02 11:30:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:30:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:30:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:52.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:30:53 compute-0 ceph-mon[74676]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Feb 02 11:30:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113054 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:30:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:30:54 compute-0 ceph-mon[74676]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:30:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:54.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113055 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:30:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:30:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:30:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:30:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:56.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:30:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:56.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:30:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:30:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:57.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:30:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:30:57.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:30:57 compute-0 ceph-mon[74676]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:30:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:30:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:30:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:30:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:30:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:30:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:30:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:30:58.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:30:59 compute-0 ceph-mon[74676]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:30:59 compute-0 podman[253585]: 2026-02-02 11:30:59.268858054 +0000 UTC m=+0.059141487 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 02 11:30:59 compute-0 podman[253586]: 2026-02-02 11:30:59.296638754 +0000 UTC m=+0.085254238 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:30:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:30:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:30:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:31:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:00.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:01 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 6.
Feb 02 11:31:01 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:31:01 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.029s CPU time.
Feb 02 11:31:01 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:31:01 compute-0 ceph-mon[74676]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:31:01 compute-0 podman[253681]: 2026-02-02 11:31:01.394669417 +0000 UTC m=+0.041886122 container create 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:31:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd1e59e7a7c7fe196da01a9ed9b356cd5efd123ce0e100ad60d28007e5bfa93/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd1e59e7a7c7fe196da01a9ed9b356cd5efd123ce0e100ad60d28007e5bfa93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd1e59e7a7c7fe196da01a9ed9b356cd5efd123ce0e100ad60d28007e5bfa93/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd1e59e7a7c7fe196da01a9ed9b356cd5efd123ce0e100ad60d28007e5bfa93/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:01 compute-0 podman[253681]: 2026-02-02 11:31:01.451229511 +0000 UTC m=+0.098446236 container init 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:31:01 compute-0 podman[253681]: 2026-02-02 11:31:01.457464991 +0000 UTC m=+0.104681696 container start 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:01 compute-0 bash[253681]: 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4
Feb 02 11:31:01 compute-0 podman[253681]: 2026-02-02 11:31:01.37609216 +0000 UTC m=+0.023308885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:31:01 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:31:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:01 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:31:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:31:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:02.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.24682 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:03 compute-0 ceph-mon[74676]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Feb 02 11:31:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3842909641' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Feb 02 11:31:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/4206856929' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Feb 02 11:31:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.24682 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Feb 02 11:31:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:04 compute-0 ceph-mon[74676]: from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:04 compute-0 ceph-mon[74676]: from='client.24682 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:04 compute-0 ceph-mon[74676]: from='client.24682 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Feb 02 11:31:04 compute-0 sudo[253741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:31:04 compute-0 sudo[253741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:04 compute-0 sudo[253741]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:04.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:04.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:05 compute-0 ceph-mon[74676]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:06 compute-0 ceph-mon[74676]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Feb 02 11:31:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Feb 02 11:31:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:07.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:31:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:07 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:31:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:07 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:31:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:31:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:08.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:09 compute-0 ceph-mon[74676]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:31:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:10 compute-0 ceph-mon[74676]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:10.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:13 compute-0 ceph-mon[74676]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:31:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:13 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff100000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:14 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f40016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:31:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:31:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:31:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:15 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:15 compute-0 ceph-mon[74676]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:15 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:16 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:16.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:17.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:31:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113117 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:31:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:17 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:17 compute-0 ceph-mon[74676]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:17 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:18 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:31:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:18.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:19 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:19 compute-0 ceph-mon[74676]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:31:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:19 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113120 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:31:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:20 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:31:20 compute-0 ceph-mon[74676]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:31:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:21 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:31:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6868 writes, 27K keys, 6868 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6868 writes, 1340 syncs, 5.13 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 522 writes, 837 keys, 522 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 522 writes, 250 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 11:31:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:21 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:22 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:31:22.665 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:31:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:31:22.665 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:31:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:31:22.666 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:31:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:23 compute-0 ceph-mon[74676]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:23 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:23 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:24 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:24 compute-0 sudo[253802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:31:24 compute-0 sudo[253802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:24 compute-0 sudo[253802]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:25 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:25 compute-0 ceph-mon[74676]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1101696040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2591360642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:25 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:26 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.24761 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1468374030' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.307326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886307377, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 948, "num_deletes": 251, "total_data_size": 1590301, "memory_usage": 1611168, "flush_reason": "Manual Compaction"}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb 02 11:31:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 02 11:31:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3909201137' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886323678, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1575895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19026, "largest_seqno": 19973, "table_properties": {"data_size": 1571199, "index_size": 2284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10324, "raw_average_key_size": 19, "raw_value_size": 1561804, "raw_average_value_size": 2974, "num_data_blocks": 102, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031802, "oldest_key_time": 1770031802, "file_creation_time": 1770031886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 16468 microseconds, and 3774 cpu microseconds.
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.323798) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1575895 bytes OK
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.323823) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.326542) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.326578) EVENT_LOG_v1 {"time_micros": 1770031886326569, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.326602) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1585802, prev total WAL file size 1585802, number of live WAL files 2.
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.327320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1538KB)], [41(12MB)]
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886327439, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15091452, "oldest_snapshot_seqno": -1}
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Feb 02 11:31:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4988 keys, 12934451 bytes, temperature: kUnknown
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886492966, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12934451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12900344, "index_size": 20557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 127343, "raw_average_key_size": 25, "raw_value_size": 12809005, "raw_average_value_size": 2567, "num_data_blocks": 842, "num_entries": 4988, "num_filter_entries": 4988, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770031886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.493333) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12934451 bytes
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.494533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.1 rd, 78.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.9 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(17.8) write-amplify(8.2) OK, records in: 5504, records dropped: 516 output_compression: NoCompression
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.494552) EVENT_LOG_v1 {"time_micros": 1770031886494542, "job": 20, "event": "compaction_finished", "compaction_time_micros": 165614, "compaction_time_cpu_micros": 26248, "output_level": 6, "num_output_files": 1, "total_output_size": 12934451, "num_input_records": 5504, "num_output_records": 4988, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886494851, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031886496844, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.327154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.496922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.496927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.496930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.496931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:31:26.496933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:31:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:26 compute-0 nova_compute[251290]: 2026-02-02 11:31:26.787 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.067 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.067 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.067 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:31:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:31:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:27.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.099 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.099 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.100 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.100 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.100 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.100 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.101 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.101 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.101 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:27 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:27 compute-0 ceph-mon[74676]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:27 compute-0 ceph-mon[74676]: from='client.24761 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3909201137' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Feb 02 11:31:27 compute-0 ceph-mon[74676]: from='client.15063 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 11:31:27 compute-0 ceph-mon[74676]: from='client.15063 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.704 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.705 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.705 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.705 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:31:27 compute-0 nova_compute[251290]: 2026-02-02 11:31:27.705 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:31:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:27 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:28 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:31:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:31:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274136813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.225 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.363 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.364 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4897MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.365 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.365 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.437 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.438 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.459 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:31:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/948549566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:28 compute-0 ceph-mon[74676]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:31:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3274136813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4281381352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:28 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:31:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:31:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437462513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.956 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.962 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.994 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.996 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:31:28 compute-0 nova_compute[251290]: 2026-02-02 11:31:28.997 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:31:29 compute-0 nova_compute[251290]: 2026-02-02 11:31:29.222 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:31:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:29 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:31:29
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs']
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
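The balancer pass is a no-op: in upmap mode it may queue at most 10 optimizations per pass (the "0/10") and will not let misplaced PGs exceed the 0.05 fraction it logged, about 17 of the 353 PGs here; with every PG already active+clean there is nothing to move. The budget arithmetic:

    total_pgs = 353          # from the pgmap lines
    max_misplaced = 0.05     # from "Mode upmap, max misplaced 0.050000"
    print(int(total_pgs * max_misplaced))   # ~17 PGs may be misplaced at once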
Feb 02 11:31:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:31:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/437462513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:31:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:31:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:29 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
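Every pg_autoscaler line above fits one formula: the logged pg target equals usage_ratio * bias * 300, and the factor 300 is consistent with the default target of 100 PGs per OSD times the 3 OSDs implied by this 60 GiB cluster (both factors are inferred, the autoscaler does not print them). The result is then quantized to a power of two, with shrinks limited, which is why cephfs.cephfs.meta steps 32 to 16 rather than collapsing to 1. A check against two of the logged values:

    def pg_target(usage_ratio: float, bias: float,
                  num_osds: int = 3, target_pg_per_osd: int = 100) -> float:
        # num_osds and target_pg_per_osd are inferences that reproduce the
        # logged numbers; they are not printed by the autoscaler itself.
        return usage_ratio * bias * num_osds * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr: 0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0))  # meta: 0.0006104707950771635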
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:31:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
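The rbd_support module is reloading per-pool schedules for its trash-purge and mirror-snapshot handlers; every cursor is empty (start_after=), i.e. a full reload. The same schedules can be inspected from the CLI; a sketch over the pools named above:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        # Lists what TrashPurgeScheduleHandler / MirrorSnapshotScheduleHandler
        # would have loaded for this pool (empty here, judging by the log).
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool])
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool])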
Feb 02 11:31:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:30 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:31:30 compute-0 podman[253877]: 2026-02-02 11:31:30.267702114 +0000 UTC m=+0.057411810 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Feb 02 11:31:30 compute-0 podman[253878]: 2026-02-02 11:31:30.295764625 +0000 UTC m=+0.084705379 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
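The two podman events are periodic container health checks: podman executes the test configured in config_data ('/openstack/healthcheck', bind-mounted read-only into the container) and records health_status=healthy with a zero failing streak. The same check can be forced by hand; a sketch using the container names from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        # `podman healthcheck run` executes the container's configured test
        # command and exits 0 on healthy, mirroring the events above.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")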
Feb 02 11:31:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:30 compute-0 ceph-mon[74676]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
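The recurring pgmap lines carry the cluster's vitals in a fixed shape: PG count and states, logical data versus raw used, available over total, and client IO rates. A small parser for that shape, written against the variants seen in this log:

    import re

    line = ("pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, "
            "153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+); (.+?) data, "
                  r"(.+?) used, (.+?) / (.+?) avail", line)
    ver, pgs, n, states, data, used, avail, total = m.groups()
    print(f"v{ver}: {pgs} pgs ({states}), {used} used of {total}")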
Feb 02 11:31:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:31 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:31 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:31 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:31:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:31 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:31:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:32 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:32.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:33 compute-0 ceph-mon[74676]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:33 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:33 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:34 compute-0 sudo[253926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:31:34 compute-0 sudo[253926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:34 compute-0 sudo[253926]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:34 compute-0 sudo[253951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:31:34 compute-0 sudo[253951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:34 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:34 compute-0 sudo[253951]: pam_unix(sudo:session): session closed for user root
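These sudo pairs are the cephadm orchestrator's remote-execution pattern: connect as ceph-admin, locate the interpreter with `which python3`, then run the cluster's pinned cephadm copy under the cluster fsid with an 895-second timeout (gather-facts here; ceph-volume calls follow). Reproduced locally as a sketch, paths verbatim from the log:

    import subprocess

    fsid = "1d33f80b-d6ca-501c-bac7-184379b89279"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    python3 = subprocess.run(["sudo", "which", "python3"], capture_output=True,
                             text=True, check=True).stdout.strip()
    subprocess.run(["sudo", python3, cephadm, "--timeout", "895", "gather-facts"],
                   check=True)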
Feb 02 11:31:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:31:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:34.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:31:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:31:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:31:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
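Each handle_command/audit pair above is a monitor command arriving over the mon protocol; the mgr and nova submit the same JSON that the `ceph` CLI would. A minimal librados sketch producing the df dispatch seen repeatedly in this log (conf path and client name as nova uses them; requires the python3-rados binding):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    # Shows up in the mon audit log as:
    #   entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(ret, json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()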
Feb 02 11:31:34 compute-0 sudo[254006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:31:34 compute-0 sudo[254006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:34 compute-0 sudo[254006]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:34 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
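Grace lasted roughly six seconds of the announced 90: the reaper reloaded client info from the backend, saw reclaim complete(0) and clid count(0), and with no NFSv4 clients holding reclaimable state the server lifted grace early. The decision reduces to a check like this sketch (the exact ganesha condition is an assumption):

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # With zero clients to wait for, the 90s grace window
        # need not run to completion.
        return clid_count == 0 or reclaim_complete >= clid_count

    print(can_lift_grace(0, 0))   # True -> "NFS Server Now NOT IN GRACE"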
Feb 02 11:31:34 compute-0 sudo[254031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:31:34 compute-0 sudo[254031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:35 compute-0 ceph-mon[74676]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:31:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.252124405 +0000 UTC m=+0.044235690 container create aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:31:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:35 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:35 compute-0 systemd[1]: Started libpod-conmon-aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990.scope.
Feb 02 11:31:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.232338403 +0000 UTC m=+0.024449698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.342543337 +0000 UTC m=+0.134654642 container init aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.349805807 +0000 UTC m=+0.141917092 container start aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.353863954 +0000 UTC m=+0.145975229 container attach aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:31:35 compute-0 nervous_wu[254117]: 167 167
Feb 02 11:31:35 compute-0 systemd[1]: libpod-aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990.scope: Deactivated successfully.
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.357593472 +0000 UTC m=+0.149704767 container died aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53ab331049d57cad6556014356ef315cab8be9924f029d5c58e4e55cb9218e6-merged.mount: Deactivated successfully.
Feb 02 11:31:35 compute-0 podman[254100]: 2026-02-02 11:31:35.395721404 +0000 UTC m=+0.187832679 container remove aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wu, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:31:35 compute-0 systemd[1]: libpod-conmon-aa700083572191f8880cfda4b8a5e3e02edb4621a25f62ba1cbdcc4365967990.scope: Deactivated successfully.
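The short-lived nervous_wu container exists only to print "167 167": cephadm appears to probe the image for the ceph uid and gid before preparing OSDs, and 167 is the ceph user and group id used for later chowns and bind mounts. A hedged reproduction (the exact stat invocation is an assumption; the image digest is from the log):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        # Prints the owning uid/gid of /var/lib/ceph inside the image,
        # i.e. the "167 167" captured from the container above.
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())   # expected: 167 167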
Feb 02 11:31:35 compute-0 podman[254139]: 2026-02-02 11:31:35.541348442 +0000 UTC m=+0.045908878 container create 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:31:35 compute-0 systemd[1]: Started libpod-conmon-619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472.scope.
Feb 02 11:31:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:35 compute-0 podman[254139]: 2026-02-02 11:31:35.52328688 +0000 UTC m=+0.027847336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:35 compute-0 podman[254139]: 2026-02-02 11:31:35.627968265 +0000 UTC m=+0.132528721 container init 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:31:35 compute-0 podman[254139]: 2026-02-02 11:31:35.639888239 +0000 UTC m=+0.144448675 container start 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:31:35 compute-0 podman[254139]: 2026-02-02 11:31:35.645370558 +0000 UTC m=+0.149930994 container attach 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:31:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:35 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:35 compute-0 hungry_hofstadter[254157]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:31:35 compute-0 hungry_hofstadter[254157]: --> All data devices are unavailable
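The `lvm batch` run launched above declines its only candidate: /dev/ceph_vg0/ceph_lv0 is "unavailable" because, as the lvm list output further down shows, the LV already carries ceph.osd_id=1 tags, so it is an existing OSD rather than a deployable device. The availability test amounts to inspecting those tags; a sketch:

    def lv_is_candidate(lv_tags: str) -> bool:
        # An LV already tagged with a ceph OSD id is skipped by
        # `ceph-volume lvm batch`, yielding "All data devices are unavailable".
        tags = dict(t.split("=", 1) for t in lv_tags.split(",") if "=" in t)
        return not tags.get("ceph.osd_id")

    print(lv_is_candidate("ceph.osd_id=1,ceph.type=block"))   # False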
Feb 02 11:31:35 compute-0 systemd[1]: libpod-619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472.scope: Deactivated successfully.
Feb 02 11:31:36 compute-0 podman[254172]: 2026-02-02 11:31:36.034971486 +0000 UTC m=+0.030502672 container died 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e684bb198ddf65e968bf396bd9e4a10c60ff71e2b1f124ec4c25f92ae3986584-merged.mount: Deactivated successfully.
Feb 02 11:31:36 compute-0 podman[254172]: 2026-02-02 11:31:36.082432718 +0000 UTC m=+0.077963884 container remove 619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:31:36 compute-0 systemd[1]: libpod-conmon-619cd1e35495242b224206637ffbbba1877dad08ed6b7c65c7b165239e444472.scope: Deactivated successfully.
Feb 02 11:31:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:36 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:36 compute-0 sudo[254031]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:36 compute-0 sudo[254185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:31:36 compute-0 sudo[254185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:36 compute-0 sudo[254185]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:36 compute-0 sudo[254210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:31:36 compute-0 sudo[254210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.683853627 +0000 UTC m=+0.043793437 container create 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb 02 11:31:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:36.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:36 compute-0 systemd[1]: Started libpod-conmon-18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262.scope.
Feb 02 11:31:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.665796225 +0000 UTC m=+0.025736055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.770236133 +0000 UTC m=+0.130175983 container init 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.778419159 +0000 UTC m=+0.138358969 container start 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.782338713 +0000 UTC m=+0.142278553 container attach 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:31:36 compute-0 systemd[1]: libpod-18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262.scope: Deactivated successfully.
Feb 02 11:31:36 compute-0 xenodochial_mcnulty[254293]: 167 167
Feb 02 11:31:36 compute-0 conmon[254293]: conmon 18fb385deead7e245f5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262.scope/container/memory.events
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.785210166 +0000 UTC m=+0.145149976 container died 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-de68938cd48a363aabe9e3adb927405b8613873a02aa5ec19e848072016ae342-merged.mount: Deactivated successfully.
Feb 02 11:31:36 compute-0 podman[254277]: 2026-02-02 11:31:36.826431107 +0000 UTC m=+0.186370917 container remove 18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:31:36 compute-0 systemd[1]: libpod-conmon-18fb385deead7e245f5f09ee7834dd987a7ccbafa1d1a5e8775466a40daea262.scope: Deactivated successfully.
Feb 02 11:31:36 compute-0 podman[254318]: 2026-02-02 11:31:36.965215897 +0000 UTC m=+0.044552558 container create 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:31:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:36] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:36] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:31:37 compute-0 systemd[1]: Started libpod-conmon-48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18.scope.
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:36.942093339 +0000 UTC m=+0.021429990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df386ecdfe4d657768377f0ead93b8413e42f8276e95fa59b8f43486b530a02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df386ecdfe4d657768377f0ead93b8413e42f8276e95fa59b8f43486b530a02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df386ecdfe4d657768377f0ead93b8413e42f8276e95fa59b8f43486b530a02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df386ecdfe4d657768377f0ead93b8413e42f8276e95fa59b8f43486b530a02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:37.066451612 +0000 UTC m=+0.145788273 container init 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:37.073431054 +0000 UTC m=+0.152767695 container start 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:37.077304036 +0000 UTC m=+0.156640837 container attach 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:37.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
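Alertmanager cannot deliver its dashboard webhook: both POSTs to /api/prometheus_receiver on compute-1 and compute-2 port 8443 exceed the context deadline after two attempts (only compute-0's mgr is answering here). The failing call is an HTTP POST under a deadline; a minimal reproduction with an assumed empty payload:

    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        # The timeout plays the role of alertmanager's context deadline.
        urllib.request.urlopen(req, timeout=5)
    except Exception as exc:
        print("notify failed:", exc)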
Feb 02 11:31:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:37 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:37 compute-0 ceph-mon[74676]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]: {
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:     "1": [
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:         {
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "devices": [
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "/dev/loop3"
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             ],
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "lv_name": "ceph_lv0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "lv_size": "21470642176",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "name": "ceph_lv0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "tags": {
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.cluster_name": "ceph",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.crush_device_class": "",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.encrypted": "0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.osd_id": "1",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.type": "block",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.vdo": "0",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:                 "ceph.with_tpm": "0"
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             },
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "type": "block",
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:             "vg_name": "ceph_vg0"
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:         }
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]:     ]
Feb 02 11:31:37 compute-0 dazzling_beaver[254336]: }
Feb 02 11:31:37 compute-0 systemd[1]: libpod-48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18.scope: Deactivated successfully.
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:37.391175016 +0000 UTC m=+0.470511657 container died 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8df386ecdfe4d657768377f0ead93b8413e42f8276e95fa59b8f43486b530a02-merged.mount: Deactivated successfully.
Feb 02 11:31:37 compute-0 podman[254318]: 2026-02-02 11:31:37.486190911 +0000 UTC m=+0.565527552 container remove 48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:31:37 compute-0 systemd[1]: libpod-conmon-48361346cdbdee956773ad3a7ceda0ff129da5d1c6a1bac8690c5910be813b18.scope: Deactivated successfully.
Feb 02 11:31:37 compute-0 sudo[254210]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:37 compute-0 sudo[254360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:31:37 compute-0 sudo[254360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:37 compute-0 sudo[254360]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:37 compute-0 sudo[254385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:31:37 compute-0 sudo[254385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:37 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.075439959 +0000 UTC m=+0.041789819 container create 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:31:38 compute-0 systemd[1]: Started libpod-conmon-3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06.scope.
Feb 02 11:31:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.055565555 +0000 UTC m=+0.021915435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:38 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.165497911 +0000 UTC m=+0.131847771 container init 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.1741108 +0000 UTC m=+0.140460660 container start 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:31:38 compute-0 frosty_burnell[254465]: 167 167
Feb 02 11:31:38 compute-0 systemd[1]: libpod-3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06.scope: Deactivated successfully.
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.179549797 +0000 UTC m=+0.145899667 container attach 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.180054262 +0000 UTC m=+0.146404122 container died 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:31:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1233a1965de94cd96183feed9374b8f2d28e473a6b55a646b1791d2e89c24fd1-merged.mount: Deactivated successfully.
Feb 02 11:31:38 compute-0 podman[254449]: 2026-02-02 11:31:38.259874567 +0000 UTC m=+0.226224427 container remove 3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:31:38 compute-0 systemd[1]: libpod-conmon-3f66815e32ebcfeecf7fdffd0c7c540dbbfec432cc142345d9c1ef8d05ca2c06.scope: Deactivated successfully.
Feb 02 11:31:38 compute-0 podman[254489]: 2026-02-02 11:31:38.449095605 +0000 UTC m=+0.080740614 container create 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:31:38 compute-0 podman[254489]: 2026-02-02 11:31:38.390619645 +0000 UTC m=+0.022264744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:31:38 compute-0 systemd[1]: Started libpod-conmon-2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7.scope.
Feb 02 11:31:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610136eda887e7a49323502cbedd349443cd1d96eb8e1fadbe1864149ef042b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610136eda887e7a49323502cbedd349443cd1d96eb8e1fadbe1864149ef042b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610136eda887e7a49323502cbedd349443cd1d96eb8e1fadbe1864149ef042b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610136eda887e7a49323502cbedd349443cd1d96eb8e1fadbe1864149ef042b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:31:38 compute-0 podman[254489]: 2026-02-02 11:31:38.539633551 +0000 UTC m=+0.171278570 container init 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:31:38 compute-0 podman[254489]: 2026-02-02 11:31:38.548348593 +0000 UTC m=+0.179993612 container start 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:31:38 compute-0 podman[254489]: 2026-02-02 11:31:38.552324428 +0000 UTC m=+0.183969447 container attach 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:39 compute-0 lvm[254581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:31:39 compute-0 lvm[254581]: VG ceph_vg0 finished
Feb 02 11:31:39 compute-0 adoring_blackwell[254506]: {}
Feb 02 11:31:39 compute-0 systemd[1]: libpod-2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7.scope: Deactivated successfully.
Feb 02 11:31:39 compute-0 podman[254489]: 2026-02-02 11:31:39.257479865 +0000 UTC m=+0.889124904 container died 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:31:39 compute-0 systemd[1]: libpod-2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7.scope: Consumed 1.031s CPU time.
Feb 02 11:31:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:39 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:39 compute-0 ceph-mon[74676]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-610136eda887e7a49323502cbedd349443cd1d96eb8e1fadbe1864149ef042b7-merged.mount: Deactivated successfully.
Feb 02 11:31:39 compute-0 podman[254489]: 2026-02-02 11:31:39.486229795 +0000 UTC m=+1.117874814 container remove 2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:31:39 compute-0 systemd[1]: libpod-conmon-2399d80cb75c1f547d80520b7afe5bdb7127e4157382abbd491cac91b40f8ba7.scope: Deactivated successfully.
Feb 02 11:31:39 compute-0 sudo[254385]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:31:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:31:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:39 compute-0 sudo[254600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:31:39 compute-0 sudo[254600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:39 compute-0 sudo[254600]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:39 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113140 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:31:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:40 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:31:40 compute-0 ceph-mon[74676]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:40.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:41 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:41 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:42 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:42.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:43 compute-0 ceph-mon[74676]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:31:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:43 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:43 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d8003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:44 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3607207810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:31:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3607207810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:31:44 compute-0 sudo[254629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:31:44 compute-0 sudo[254629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:31:44 compute-0 sudo[254629]: pam_unix(sudo:session): session closed for user root
Feb 02 11:31:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:31:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:45 compute-0 ceph-mon[74676]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:45 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f80034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:45 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:46 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff100002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:46] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:31:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:46] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:31:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:47.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:31:47 compute-0 ceph-mon[74676]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Feb 02 11:31:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:47 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:47 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0d0000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:48 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0f4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:48 compute-0 ceph-mon[74676]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:48.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:48.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:49 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff100002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:31:49 compute-0 kernel: ganesha.nfsd[254655]: segfault at 50 ip 00007ff18881e32e sp 00007ff0ed7f9210 error 4 in libntirpc.so.5.8[7ff188803000+2c000] likely on CPU 3 (core 0, socket 3)
Feb 02 11:31:49 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:31:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[253696]: 02/02/2026 11:31:49 : epoch 69808af5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff0e0003c10 fd 48 proxy ignored for local
Feb 02 11:31:49 compute-0 systemd[1]: Started Process Core Dump (PID 254662/UID 0).
Feb 02 11:31:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000030s ======
Feb 02 11:31:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Feb 02 11:31:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:50 compute-0 systemd-coredump[254663]: Process 253700 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007ff18881e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007ff188828900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:31:50 compute-0 systemd[1]: systemd-coredump@6-254662-0.service: Deactivated successfully.
Feb 02 11:31:50 compute-0 podman[254669]: 2026-02-02 11:31:50.984435269 +0000 UTC m=+0.032188161 container died 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dd1e59e7a7c7fe196da01a9ed9b356cd5efd123ce0e100ad60d28007e5bfa93-merged.mount: Deactivated successfully.
Feb 02 11:31:51 compute-0 podman[254669]: 2026-02-02 11:31:51.019628906 +0000 UTC m=+0.067381798 container remove 16cb242c4f8a41bb2fd157ce2f5f7229088e534c725300c95895c8054494e1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:31:51 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:31:51 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:31:51 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.132s CPU time.
Feb 02 11:31:51 compute-0 ceph-mon[74676]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:52.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:53 compute-0 ceph-mon[74676]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:31:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:31:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:54.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113155 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:31:55 compute-0 ceph-mon[74676]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:31:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:31:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:31:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:31:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:56.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:31:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:56.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:56] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:31:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:31:56] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Feb 02 11:31:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:31:57.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:31:57 compute-0 ceph-mon[74676]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:31:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:31:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:31:58.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:31:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:31:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:31:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:31:59 compute-0 ceph-mon[74676]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:31:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:31:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:31:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:32:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:00.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:32:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:32:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113200 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:32:01 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 7.
Feb 02 11:32:01 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:32:01 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.132s CPU time.
Feb 02 11:32:01 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:32:01 compute-0 podman[254725]: 2026-02-02 11:32:01.293877542 +0000 UTC m=+0.084402910 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 02 11:32:01 compute-0 podman[254727]: 2026-02-02 11:32:01.309180514 +0000 UTC m=+0.090139576 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 02 11:32:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:01 compute-0 podman[254810]: 2026-02-02 11:32:01.454918706 +0000 UTC m=+0.094486782 container create 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:32:01 compute-0 podman[254810]: 2026-02-02 11:32:01.383666497 +0000 UTC m=+0.023234593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04158b25fe6d86efabab796f6df44d9622ba8e91701e1c797e336f2a8894b5b1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04158b25fe6d86efabab796f6df44d9622ba8e91701e1c797e336f2a8894b5b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04158b25fe6d86efabab796f6df44d9622ba8e91701e1c797e336f2a8894b5b1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04158b25fe6d86efabab796f6df44d9622ba8e91701e1c797e336f2a8894b5b1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:01 compute-0 podman[254810]: 2026-02-02 11:32:01.522423016 +0000 UTC m=+0.161991112 container init 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:32:01 compute-0 podman[254810]: 2026-02-02 11:32:01.529267574 +0000 UTC m=+0.168835650 container start 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:32:01 compute-0 bash[254810]: 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4
Feb 02 11:32:01 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:32:01 compute-0 ceph-mon[74676]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:32:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:32:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:02 compute-0 ceph-mon[74676]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
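The anonymous `HEAD / HTTP/1.0` pairs from 192.168.122.100 and .102 recur every two seconds for the rest of this section: they are external health probes against radosgw, and each beast access line carries the measured latency. A minimal sketch for summarizing them, with a regex derived from (and only tested against) the beast lines in this journal; feed it `journalctl` output on stdin:

```python
#!/usr/bin/env python3
"""Aggregate radosgw beast access-log latency per client IP."""
import re
import sys
from collections import defaultdict

# Matches lines like:
#   beast: 0x...: 192.168.122.100 - anonymous [ts] "HEAD / HTTP/1.0" 200 ... latency=0.001000029s
BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - \S+ \[.*?\] "(?P<req>[^"]+)" '
    r'(?P<status>\d+) .* latency=(?P<latency>[\d.]+)s'
)

totals = defaultdict(lambda: [0, 0.0])  # ip -> [request count, latency sum]
for line in sys.stdin:
    m = BEAST.search(line)
    if m:
        bucket = totals[m.group("ip")]
        bucket[0] += 1
        bucket[1] += float(m.group("latency"))

for ip, (count, total) in sorted(totals.items()):
    print(f"{ip}: {count} requests, mean latency {total / count:.6f}s")
```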
Feb 02 11:32:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:32:04 compute-0 sudo[254870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:32:04 compute-0 sudo[254870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:04 compute-0 sudo[254870]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:05 compute-0 ceph-mon[74676]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb 02 11:32:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:32:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:06.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:32:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:06] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:32:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:06] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:32:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:07.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:32:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:07.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:32:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:07.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
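Alertmanager cannot deliver ceph-dashboard webhook notifications to compute-1 or compute-2 on port 8443 (dial timeouts and context deadlines), and the error repeats at 11:32:17 and 11:32:27 below. One way to reproduce the failure outside alertmanager, using only the stdlib and the URLs copied from the errors above:

```python
#!/usr/bin/env python3
"""Probe the prometheus_receiver endpoints alertmanager fails to reach."""
import json
import urllib.error
import urllib.request

ENDPOINTS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]

for url in ENDPOINTS:
    req = urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # empty body; enough to test reachability
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(url, "->", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print(url, "->", exc)
```

A timeout here points at the network path or a listener that is down, matching the i/o timeout in the log, rather than an HTTP-level rejection.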
Feb 02 11:32:07 compute-0 ceph-mon[74676]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:07 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:32:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:07 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:32:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:08.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:09 compute-0 ceph-mon[74676]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:11 compute-0 ceph-mon[74676]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb 02 11:32:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:32:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:32:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:32:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:12.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:13 compute-0 ceph-mon[74676]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
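This completes a full grace cycle for the new ganesha instance: it enters a 90-second grace window, reloads reclaim data from the backend, finds no clients to wait for ("clid count(0)"), and lifts grace early. A minimal sketch that tracks these transitions from journal output on stdin; the matched substrings are copied verbatim from the lines above:

```python
#!/usr/bin/env python3
"""Track NFS-Ganesha grace-period transitions in journal output."""
import sys

MARKERS = {
    "NFS Server Now IN GRACE": "entered grace",
    "grace reload client info completed": "reclaim data loaded from backend",
    "NFS Server Now NOT IN GRACE": "left grace",
}

for line in sys.stdin:
    for marker, meaning in MARKERS.items():
        if marker in line:
            print(f"{meaning}: {line.strip()}")
            break
```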
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
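The DBUS CRIT lines are expected in this containerized deployment: the container's volume list (see the create event at 11:32:01) does not bind-mount /run/dbus, so ganesha cannot register its admin interfaces and the dbus service thread exits just below. The Kerberos warnings are similarly benign when no `nfs` principal is configured. A minimal check, assuming the container name taken from the journal tag above:

```python
#!/usr/bin/env python3
"""Check whether the ganesha container can see the host DBus socket."""
import subprocess

CONTAINER = (
    "ceph-1d33f80b-d6ca-501c-bac7-184379b89279-"
    "nfs-cephfs-2-0-compute-0-lrvhze"
)
# `test -S` succeeds only if the path exists and is a socket.
result = subprocess.run(
    ["podman", "exec", CONTAINER, "test", "-S", "/run/dbus/system_bus_socket"]
)
print("dbus socket visible" if result.returncode == 0 else "dbus socket missing")
```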
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:32:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:13 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
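These svc_vc_recv EVENT lines recur for the rest of the section and line up with the haproxy "Layer4 check passed" messages further below: the load balancer opens a bare TCP connection to the ganesha backend and closes it without sending the PROXY-protocol header ganesha expects, so each probe is logged and the transport marked dead. (The stray `%` in "rlen = %" is printed by ganesha itself and is preserved here verbatim.) The probe can be reproduced with a plain connect-and-close; host and port are assumptions for this deployment:

```python
#!/usr/bin/env python3
"""Reproduce a Layer4 health probe against the ganesha NFS port."""
import socket

# Connect and close without sending a PROXY header or any RPC bytes --
# exactly what a Layer4 check does. 2049 is the conventional NFS port;
# adjust for the backend port haproxy actually targets here.
with socket.create_connection(("127.0.0.1", 2049), timeout=5):
    pass
print("probe sent; expect one svc_vc_recv EVENT in the ganesha log")
```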
Feb 02 11:32:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:32:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:14 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:32:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:15 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:15 compute-0 ceph-mon[74676]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:32:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:15 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264001b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Feb 02 11:32:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:16 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:16 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:16.593 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:32:16 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:16.596 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:32:16 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:16.598 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
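The agent acknowledges the southbound nb_cfg bump by writing `neutron:ovn-metadata-sb-cfg` into its Chassis_Private row via an ovsdbapp transaction. A sketch of the CLI equivalent of that DbSetCommand, with the record UUID and value copied from the transaction above (the key must be quoted because it contains a colon):

```python
#!/usr/bin/env python3
"""CLI equivalent of the agent's DbSetCommand, via ovn-sbctl."""
import subprocess

subprocess.run(
    [
        "ovn-sbctl", "set", "Chassis_Private",
        "e4587b97-1121-4d6d-b583-e59641a06362",
        'external_ids:"neutron:ovn-metadata-sb-cfg"=2',
    ],
    check=True,
)
```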
Feb 02 11:32:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:16 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:32:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:16 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:32:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:16.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:32:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:32:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:17.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:32:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:17.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:32:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113217 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:32:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:17 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274002070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:17 compute-0 ceph-mon[74676]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Feb 02 11:32:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:17 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82500016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb 02 11:32:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:18 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264002610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:18.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:19 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:19 compute-0 ceph-mon[74676]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb 02 11:32:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:19 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:32:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:19 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274002070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb 02 11:32:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:20 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82500016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:21 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264002610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:21 compute-0 ceph-mon[74676]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb 02 11:32:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:21 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:32:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:22 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274002070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:22.666 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:32:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:22.667 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:32:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:32:22.667 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
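The acquire/acquired/released trio is oslo.concurrency's standard debug trace around any function wrapped with `lockutils.synchronized`; neutron's ProcessMonitor uses it to serialize its child-process check. A minimal sketch that produces the same three-line pattern (oslo.concurrency must be installed; the function body is a stand-in, not neutron's code):

```python
#!/usr/bin/env python3
"""Reproduce the lockutils acquire/release debug trio seen above."""
import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    print("checking child processes")  # stand-in body

check_child_processes()
```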
Feb 02 11:32:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:22.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:22.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113222 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:32:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:23 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82500016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:23 compute-0 ceph-mon[74676]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb 02 11:32:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:23 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264002610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:32:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:24 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3797073662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1747527378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:24 compute-0 sudo[254930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:32:24 compute-0 sudo[254930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:24 compute-0 sudo[254930]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:24.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:25 compute-0 nova_compute[251290]: 2026-02-02 11:32:25.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:25 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274002070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:25 compute-0 ceph-mon[74676]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:32:25 compute-0 ceph-osd[83123]: bluestore.MempoolThread fragmentation_score=0.000024 took=0.000113s
Feb 02 11:32:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:25 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:26 compute-0 nova_compute[251290]: 2026-02-02 11:32:26.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:26 compute-0 nova_compute[251290]: 2026-02-02 11:32:26.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:32:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:26 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264002610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:26.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:32:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Feb 02 11:32:27 compute-0 nova_compute[251290]: 2026-02-02 11:32:27.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:27 compute-0 nova_compute[251290]: 2026-02-02 11:32:27.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:27 compute-0 nova_compute[251290]: 2026-02-02 11:32:27.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:27 compute-0 nova_compute[251290]: 2026-02-02 11:32:27.021 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:32:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:27.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:32:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:27 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c002cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:27 compute-0 ceph-mon[74676]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb 02 11:32:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/771866178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:27 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.016 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.039 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.039 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.039 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.039 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.040 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:32:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:28 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264002610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3009686122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:28 compute-0 ceph-mon[74676]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:32:28 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415216170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.537 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
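For each resource audit, nova shells out to `ceph df` with the openstack client identity to size the RBD-backed storage (the call appears again at 11:32:28.770 below). The same call can be made directly; the top-level "stats" field names in this sketch match current Ceph releases but should be treated as an assumption:

```python
#!/usr/bin/env python3
"""Run the same `ceph df` call nova's resource tracker makes."""
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]  # assumed schema: total_bytes / total_avail_bytes
gib = 1024 ** 3
print(f"total: {stats['total_bytes'] / gib:.1f} GiB, "
      f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB")
```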
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.688 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.689 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.690 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.690 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:32:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.754 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.754 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:32:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:28.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:28 compute-0 nova_compute[251290]: 2026-02-02 11:32:28.770 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:32:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:32:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345655653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:29 compute-0 nova_compute[251290]: 2026-02-02 11:32:29.241 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:32:29 compute-0 nova_compute[251290]: 2026-02-02 11:32:29.246 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:32:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:29 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1415216170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3345655653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:32:29
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', '.nfs', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:32:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:32:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:29 compute-0 nova_compute[251290]: 2026-02-02 11:32:29.734 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:32:29 compute-0 nova_compute[251290]: 2026-02-02 11:32:29.735 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:32:29 compute-0 nova_compute[251290]: 2026-02-02 11:32:29.736 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:32:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:29 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:32:29 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:32:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:30 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:30 compute-0 ceph-mon[74676]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:30 compute-0 nova_compute[251290]: 2026-02-02 11:32:30.737 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:32:30 compute-0 nova_compute[251290]: 2026-02-02 11:32:30.738 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:32:30 compute-0 nova_compute[251290]: 2026-02-02 11:32:30.738 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:32:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:30.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:30 compute-0 nova_compute[251290]: 2026-02-02 11:32:30.756 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:32:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:30.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113230 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:32:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:31 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:31 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c002cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:32 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:32 compute-0 podman[255007]: 2026-02-02 11:32:32.275792254 +0000 UTC m=+0.056026890 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:32:32 compute-0 podman[255008]: 2026-02-02 11:32:32.303908277 +0000 UTC m=+0.083350270 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:32:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:32.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:32.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:33 compute-0 ceph-mon[74676]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Feb 02 11:32:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:33 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:33 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:34 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c0039c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:34.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:35 compute-0 ceph-mon[74676]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:35 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:35 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:36 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:36 compute-0 ceph-mon[74676]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:36.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:36] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:32:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:36] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Feb 02 11:32:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:37.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:32:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:37 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:37 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:38 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:38 compute-0 sshd-session[255057]: Invalid user latitude from 80.94.92.186 port 56180
Feb 02 11:32:38 compute-0 sshd-session[255057]: Connection closed by invalid user latitude 80.94.92.186 port 56180 [preauth]
Feb 02 11:32:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:38.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:38.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:39 compute-0 ceph-mon[74676]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:39 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:39 compute-0 sudo[255062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:32:39 compute-0 sudo[255062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:39 compute-0 sudo[255062]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:39 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:39 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:32:39 compute-0 sudo[255087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Feb 02 11:32:39 compute-0 sudo[255087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:40 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:32:40 compute-0 sudo[255087]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 sudo[255130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:32:40 compute-0 sudo[255130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:40 compute-0 sudo[255130]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:40 compute-0 sudo[255155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:32:40 compute-0 sudo[255155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:40.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:40 compute-0 sudo[255155]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:32:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:32:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:32:40 compute-0 sudo[255211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:32:40 compute-0 sudo[255211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:40 compute-0 sudo[255211]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:40 compute-0 sudo[255236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:32:40 compute-0 sudo[255236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:41 compute-0 ceph-mon[74676]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:32:41 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:32:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:41 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.367004046 +0000 UTC m=+0.041475319 container create deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:32:41 compute-0 systemd[1]: Started libpod-conmon-deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818.scope.
Feb 02 11:32:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.348271975 +0000 UTC m=+0.022743268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.450792147 +0000 UTC m=+0.125263440 container init deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.457496381 +0000 UTC m=+0.131967654 container start deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:32:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.463192006 +0000 UTC m=+0.137663279 container attach deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:32:41 compute-0 festive_liskov[255322]: 167 167
Feb 02 11:32:41 compute-0 systemd[1]: libpod-deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818.scope: Deactivated successfully.
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.464237746 +0000 UTC m=+0.138709029 container died deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2945528a81c8da4110a5619b837d8a2187e5f011550ac5095de79a677d3c3b9-merged.mount: Deactivated successfully.
Feb 02 11:32:41 compute-0 podman[255305]: 2026-02-02 11:32:41.501549314 +0000 UTC m=+0.176020587 container remove deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:32:41 compute-0 systemd[1]: libpod-conmon-deb4502797a5817341b7298a8321da8f24b8be046209f21226a38a1fc1704818.scope: Deactivated successfully.
Feb 02 11:32:41 compute-0 podman[255348]: 2026-02-02 11:32:41.635758282 +0000 UTC m=+0.044509047 container create c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:32:41 compute-0 systemd[1]: Started libpod-conmon-c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b.scope.
Feb 02 11:32:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:41 compute-0 podman[255348]: 2026-02-02 11:32:41.617766702 +0000 UTC m=+0.026517487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:41 compute-0 podman[255348]: 2026-02-02 11:32:41.718003269 +0000 UTC m=+0.126754054 container init c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:32:41 compute-0 podman[255348]: 2026-02-02 11:32:41.726296108 +0000 UTC m=+0.135046873 container start c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:32:41 compute-0 podman[255348]: 2026-02-02 11:32:41.732560069 +0000 UTC m=+0.141310864 container attach c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:32:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:41 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:42 compute-0 infallible_kare[255364]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:32:42 compute-0 infallible_kare[255364]: --> All data devices are unavailable
Feb 02 11:32:42 compute-0 systemd[1]: libpod-c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b.scope: Deactivated successfully.
Feb 02 11:32:42 compute-0 podman[255348]: 2026-02-02 11:32:42.051094224 +0000 UTC m=+0.459845019 container died c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b47adc70533bc867eb58cf5b76fbb5cd1ed2a9df2606a3d1d5facc2a822d512-merged.mount: Deactivated successfully.
Feb 02 11:32:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:32:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:42 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:42 compute-0 podman[255348]: 2026-02-02 11:32:42.26136431 +0000 UTC m=+0.670115075 container remove c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:32:42 compute-0 systemd[1]: libpod-conmon-c8b5adac0cdf2abc09323569829fe91fb87b98735e4433ad822206697994285b.scope: Deactivated successfully.
Feb 02 11:32:42 compute-0 sudo[255236]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:42 compute-0 sudo[255392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:32:42 compute-0 sudo[255392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:42 compute-0 sudo[255392]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:42 compute-0 sudo[255417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:32:42 compute-0 sudo[255417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.79987144 +0000 UTC m=+0.043922900 container create 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:32:42 compute-0 systemd[1]: Started libpod-conmon-4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b.scope.
Feb 02 11:32:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.875503765 +0000 UTC m=+0.119555255 container init 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.78258011 +0000 UTC m=+0.026631590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.88257834 +0000 UTC m=+0.126629800 container start 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.886020119 +0000 UTC m=+0.130071599 container attach 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:32:42 compute-0 quizzical_sutherland[255502]: 167 167
Feb 02 11:32:42 compute-0 systemd[1]: libpod-4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b.scope: Deactivated successfully.
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.887823951 +0000 UTC m=+0.131875411 container died 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:32:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:42 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:32:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:42 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc0a6974744f1f5a40a7912dae4a540b6249d9793e7565ce1db692fc67cd592b-merged.mount: Deactivated successfully.
Feb 02 11:32:42 compute-0 podman[255485]: 2026-02-02 11:32:42.932558944 +0000 UTC m=+0.176610404 container remove 4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:32:42 compute-0 systemd[1]: libpod-conmon-4bb2ab8d93e21dc52749c0203d332a5aac97caba5ff26733e7b5502201de531b.scope: Deactivated successfully.
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.070900022 +0000 UTC m=+0.046429003 container create 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:32:43 compute-0 systemd[1]: Started libpod-conmon-847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb.scope.
Feb 02 11:32:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01d6b7a9784c2f37ec49f7c5b210c2f922bbead0f665fd1ef456265394e8639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01d6b7a9784c2f37ec49f7c5b210c2f922bbead0f665fd1ef456265394e8639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01d6b7a9784c2f37ec49f7c5b210c2f922bbead0f665fd1ef456265394e8639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01d6b7a9784c2f37ec49f7c5b210c2f922bbead0f665fd1ef456265394e8639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.052483429 +0000 UTC m=+0.028012440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.154533298 +0000 UTC m=+0.130062309 container init 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.161451078 +0000 UTC m=+0.136980059 container start 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.166333859 +0000 UTC m=+0.141862860 container attach 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:32:43 compute-0 ceph-mon[74676]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb 02 11:32:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:43 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]: {
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:     "1": [
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:         {
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "devices": [
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "/dev/loop3"
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             ],
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "lv_name": "ceph_lv0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "lv_size": "21470642176",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "name": "ceph_lv0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "tags": {
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.cluster_name": "ceph",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.crush_device_class": "",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.encrypted": "0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.osd_id": "1",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.type": "block",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.vdo": "0",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:                 "ceph.with_tpm": "0"
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             },
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "type": "block",
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:             "vg_name": "ceph_vg0"
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:         }
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]:     ]
Feb 02 11:32:43 compute-0 beautiful_dijkstra[255544]: }
Feb 02 11:32:43 compute-0 systemd[1]: libpod-847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb.scope: Deactivated successfully.
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.475851633 +0000 UTC m=+0.451380624 container died 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f01d6b7a9784c2f37ec49f7c5b210c2f922bbead0f665fd1ef456265394e8639-merged.mount: Deactivated successfully.
Feb 02 11:32:43 compute-0 podman[255527]: 2026-02-02 11:32:43.518030062 +0000 UTC m=+0.493559053 container remove 847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:32:43 compute-0 systemd[1]: libpod-conmon-847825088a24acb39f3a2dc7077c81e85af945cce15c11f17817f1e55ce6ebfb.scope: Deactivated successfully.
Feb 02 11:32:43 compute-0 sudo[255417]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:43 compute-0 sudo[255566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:32:43 compute-0 sudo[255566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:43 compute-0 sudo[255566]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:43 compute-0 sudo[255591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:32:43 compute-0 sudo[255591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:43 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f824c0039c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:32:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1406021116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:32:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:32:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1406021116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.086619752 +0000 UTC m=+0.044026103 container create c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:32:44 compute-0 systemd[1]: Started libpod-conmon-c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855.scope.
Feb 02 11:32:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.156341657 +0000 UTC m=+0.113748028 container init c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.065104331 +0000 UTC m=+0.022510702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.165372058 +0000 UTC m=+0.122778409 container start c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.169388674 +0000 UTC m=+0.126795025 container attach c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:32:44 compute-0 eager_stonebraker[255671]: 167 167
Feb 02 11:32:44 compute-0 systemd[1]: libpod-c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855.scope: Deactivated successfully.
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.170379493 +0000 UTC m=+0.127785864 container died c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:32:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-169e759c6586ed55c2dc0e62a5dce0d2ef0ee78c06d09a3c01797fcfd626103d-merged.mount: Deactivated successfully.
Feb 02 11:32:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:44 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:44 compute-0 podman[255655]: 2026-02-02 11:32:44.207147835 +0000 UTC m=+0.164554186 container remove c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:32:44 compute-0 systemd[1]: libpod-conmon-c8091e3f9cc3e0999e9038aa49768f50be8e5bd08af80a83129500034e52f855.scope: Deactivated successfully.
Feb 02 11:32:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1406021116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:32:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1406021116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:32:44 compute-0 podman[255694]: 2026-02-02 11:32:44.33535317 +0000 UTC m=+0.037712661 container create fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 02 11:32:44 compute-0 systemd[1]: Started libpod-conmon-fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52.scope.
Feb 02 11:32:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75790967a77a0976bc2e6b21197cf75a6ce7eceefe972a3bae540f13570e1e66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75790967a77a0976bc2e6b21197cf75a6ce7eceefe972a3bae540f13570e1e66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75790967a77a0976bc2e6b21197cf75a6ce7eceefe972a3bae540f13570e1e66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75790967a77a0976bc2e6b21197cf75a6ce7eceefe972a3bae540f13570e1e66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:32:44 compute-0 podman[255694]: 2026-02-02 11:32:44.398321019 +0000 UTC m=+0.100680530 container init fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:32:44 compute-0 podman[255694]: 2026-02-02 11:32:44.404112087 +0000 UTC m=+0.106471578 container start fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:32:44 compute-0 podman[255694]: 2026-02-02 11:32:44.408766761 +0000 UTC m=+0.111126252 container attach fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:32:44 compute-0 podman[255694]: 2026-02-02 11:32:44.31909742 +0000 UTC m=+0.021456931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:32:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:32:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:44 compute-0 sudo[255740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:32:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:44.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:44 compute-0 sudo[255740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:44 compute-0 sudo[255740]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:44.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:44 compute-0 lvm[255810]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:32:44 compute-0 lvm[255810]: VG ceph_vg0 finished
Feb 02 11:32:45 compute-0 heuristic_cannon[255710]: {}
Feb 02 11:32:45 compute-0 systemd[1]: libpod-fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52.scope: Deactivated successfully.
Feb 02 11:32:45 compute-0 podman[255694]: 2026-02-02 11:32:45.075068735 +0000 UTC m=+0.777428236 container died fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-75790967a77a0976bc2e6b21197cf75a6ce7eceefe972a3bae540f13570e1e66-merged.mount: Deactivated successfully.
Feb 02 11:32:45 compute-0 podman[255694]: 2026-02-02 11:32:45.198206453 +0000 UTC m=+0.900565945 container remove fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:32:45 compute-0 systemd[1]: libpod-conmon-fefd6c0f6121273abe5895dc847e15b57a2e181825a306db8cac5ea7df49ca52.scope: Deactivated successfully.
Feb 02 11:32:45 compute-0 sudo[255591]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:32:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:45 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:32:45 compute-0 ceph-mon[74676]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb 02 11:32:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:45 compute-0 sudo[255827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:32:45 compute-0 sudo[255827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:32:45 compute-0 sudo[255827]: pam_unix(sudo:session): session closed for user root
Feb 02 11:32:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:45 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:45 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:32:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:46 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:32:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:46.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:46.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:32:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:32:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:47.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:32:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:47 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:47 compute-0 ceph-mon[74676]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:47 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:48 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:48 compute-0 ceph-mon[74676]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:48.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:49 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260001930 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:49 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:50 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:50.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:51 compute-0 ceph-mon[74676]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 938 B/s wr, 71 op/s
Feb 02 11:32:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:51 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:51 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260001930 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 1023 B/s wr, 166 op/s
Feb 02 11:32:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:52 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:52.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:52.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113252 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 7ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:32:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:53 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:53 compute-0 ceph-mon[74676]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 1023 B/s wr, 166 op/s
Feb 02 11:32:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:53 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 426 B/s wr, 164 op/s
Feb 02 11:32:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:54 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260001930 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:54.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:54.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:55 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8274009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:55 compute-0 ceph-mon[74676]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 426 B/s wr, 164 op/s
Feb 02 11:32:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:55 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 426 B/s wr, 164 op/s
Feb 02 11:32:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:56 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:32:56 compute-0 ceph-mon[74676]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 426 B/s wr, 164 op/s
Feb 02 11:32:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:56.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:32:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:32:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:56] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:32:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:32:56] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Feb 02 11:32:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:32:57.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:32:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:57 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:57 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:32:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:58 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:32:58.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:32:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:32:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:32:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:32:59 compute-0 ceph-mon[74676]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:32:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:59 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:32:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:32:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:32:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:32:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:32:59 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:33:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:00 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:00.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:00.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:01 compute-0 ceph-mon[74676]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:33:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:01 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:33:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:02 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:03 compute-0 podman[255870]: 2026-02-02 11:33:03.304992033 +0000 UTC m=+0.089468656 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Feb 02 11:33:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:03 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:03 compute-0 ceph-mon[74676]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 85 B/s wr, 95 op/s
Feb 02 11:33:03 compute-0 podman[255871]: 2026-02-02 11:33:03.383908384 +0000 UTC m=+0.160990323 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:33:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:03 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:04 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:04 compute-0 sudo[255913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:33:04 compute-0 sudo[255913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:04 compute-0 sudo[255913]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:05 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:05 compute-0 ceph-mon[74676]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:05 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:06 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:06 compute-0 ceph-mon[74676]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:06.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:33:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Feb 02 11:33:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:07.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:33:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:07.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:33:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:07 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:07 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:08 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8250003c50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:08.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:09 compute-0 ceph-mon[74676]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:09 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8264003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:09 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8260003ea0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:10 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:10.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:11 compute-0 ceph-mon[74676]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:11 compute-0 kernel: ganesha.nfsd[254906]: segfault at 50 ip 00007f82f4c3632e sp 00007f828cff8210 error 4 in libntirpc.so.5.8[7f82f4c1b000+2c000] likely on CPU 0 (core 0, socket 0)
Feb 02 11:33:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb 02 11:33:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[254825]: 02/02/2026 11:33:11 : epoch 69808b31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f827400a2b0 fd 42 proxy ignored for local
Feb 02 11:33:11 compute-0 systemd[1]: Started Process Core Dump (PID 255945/UID 0).
Feb 02 11:33:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:33:12 compute-0 systemd-coredump[255946]: Process 254829 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f82f4c3632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:33:12 compute-0 systemd[1]: systemd-coredump@7-255945-0.service: Deactivated successfully.
Feb 02 11:33:12 compute-0 podman[255952]: 2026-02-02 11:33:12.390110475 +0000 UTC m=+0.027787228 container died 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-04158b25fe6d86efabab796f6df44d9622ba8e91701e1c797e336f2a8894b5b1-merged.mount: Deactivated successfully.
Feb 02 11:33:12 compute-0 podman[255952]: 2026-02-02 11:33:12.43663877 +0000 UTC m=+0.074315493 container remove 005c2522023edeb11578fce7b7a84b0131bc1e7d0af8f534db44206e936935b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:33:12 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:33:12 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:33:12 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.202s CPU time.
Feb 02 11:33:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:12.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:12.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:13 compute-0 ceph-mon[74676]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:33:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:33:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:14.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:14.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:15 compute-0 ceph-mon[74676]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:16 compute-0 ceph-mon[74676]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:16.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:16.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:16] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:17.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:33:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:17.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:33:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113317 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:33:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:18.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:19 compute-0 ceph-mon[74676]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:20.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:21 compute-0 ceph-mon[74676]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:33:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:33:22.667 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:33:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:33:22.668 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:33:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:33:22.668 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:33:22 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 8.
Feb 02 11:33:22 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:33:22 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.202s CPU time.
Feb 02 11:33:22 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:33:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:22.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:22.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:22 compute-0 podman[256056]: 2026-02-02 11:33:22.886162408 +0000 UTC m=+0.050036056 container create e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 02 11:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb30dcd659472d6294ef4bbd90be22f50f5b2c39c4a9c8dbe4121ff1109ea68/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb30dcd659472d6294ef4bbd90be22f50f5b2c39c4a9c8dbe4121ff1109ea68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb30dcd659472d6294ef4bbd90be22f50f5b2c39c4a9c8dbe4121ff1109ea68/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb30dcd659472d6294ef4bbd90be22f50f5b2c39c4a9c8dbe4121ff1109ea68/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:22 compute-0 podman[256056]: 2026-02-02 11:33:22.859478493 +0000 UTC m=+0.023352161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:22 compute-0 podman[256056]: 2026-02-02 11:33:22.959586435 +0000 UTC m=+0.123460093 container init e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:33:22 compute-0 podman[256056]: 2026-02-02 11:33:22.963654442 +0000 UTC m=+0.127528090 container start e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:33:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:33:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:33:22 compute-0 bash[256056]: e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc
Feb 02 11:33:22 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:33:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:33:23 compute-0 ceph-mon[74676]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb 02 11:33:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:33:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3792165765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4215007078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:24 compute-0 ceph-mon[74676]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:33:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:24 compute-0 sudo[256117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:33:24 compute-0 sudo[256117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:24 compute-0 sudo[256117]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:24 compute-0 sshd-session[256115]: Received disconnect from 195.178.110.15 port 59830:11:  [preauth]
Feb 02 11:33:24 compute-0 sshd-session[256115]: Disconnected from authenticating user root 195.178.110.15 port 59830 [preauth]
Feb 02 11:33:26 compute-0 nova_compute[251290]: 2026-02-02 11:33:26.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:26] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:27 compute-0 nova_compute[251290]: 2026-02-02 11:33:27.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:27.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:33:27 compute-0 ceph-mon[74676]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:28 compute-0 nova_compute[251290]: 2026-02-02 11:33:28.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:28 compute-0 nova_compute[251290]: 2026-02-02 11:33:28.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:28 compute-0 nova_compute[251290]: 2026-02-02 11:33:28.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:28 compute-0 nova_compute[251290]: 2026-02-02 11:33:28.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:33:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:28.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.014 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.034 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.035 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.214 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.214 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.214 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.214 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.215 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:33:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:33:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:33:29 compute-0 ceph-mon[74676]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1849473071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/268568226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:33:29
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', '.nfs', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:33:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:33:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:33:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346354948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.716 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:33:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
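
Note: each "pg target" above is reproducible as capacity_ratio x (OSD count x target PGs per OSD) x bias. The ratio/target pairs imply a constant factor of 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference, not stated in the log). A simplified sketch; the per-pool pg_num_min values of 1, 16, and 32 are likewise inferred from the quantized results, and the real autoscaler applies extra damping before actually changing pg_num:

    import math

    def pg_target(capacity_ratio, bias, pg_num_min, n_osds=3, pg_per_osd=100):
        # raw target = share of cluster capacity * cluster PG budget * bias
        raw = capacity_ratio * n_osds * pg_per_osd * bias
        if raw <= pg_num_min:
            return pg_num_min
        return 2 ** math.ceil(math.log2(raw))  # quantize to a power of two

    # '.mgr': 7.185749983720779e-06 * 300 * 1.0 ~= 0.0021557 -> floored at 1
    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))    # 1
    # 'cephfs.cephfs.meta': 5.087256625643029e-07 * 300 * 4.0 ~= 0.00061 -> 16
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))   # 16
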
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.881 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.883 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4859MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.883 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.884 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.945 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.946 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:33:29 compute-0 nova_compute[251290]: 2026-02-02 11:33:29.961 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
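
Note: the interleaved load_schedules lines come from the rbd_support mgr module's two handlers (MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler) each rescanning the same four pools, hence the apparent duplicates. The persisted schedules can be inspected from the CLI; a sketch, assuming admin credentials on this host (an empty listing would be unremarkable here):

    import subprocess

    # Pool names copied from the log; check=False because pools without
    # mirroring enabled may return an error rather than an empty list.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
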
Feb 02 11:33:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1346354948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:33:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3116858104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:30 compute-0 nova_compute[251290]: 2026-02-02 11:33:30.450 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
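
Note: Nova's resource audit shells out to the exact command logged above (each call taking ~0.5 s) to size the RBD-backed disk capacity. A minimal sketch of the same call and the top-level fields of interest in its JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("total  %.1f GiB" % (stats["total_bytes"] / 2**30))
    print("avail  %.1f GiB" % (stats["total_avail_bytes"] / 2**30))
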
Feb 02 11:33:30 compute-0 nova_compute[251290]: 2026-02-02 11:33:30.456 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:33:30 compute-0 nova_compute[251290]: 2026-02-02 11:33:30.472 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
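
Note: the inventory line above encodes how Placement derives schedulable capacity per resource class, (total - reserved) * allocation_ratio. Worked out with the logged numbers:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable:", cap)
    # MEMORY_MB schedulable: 7167.0
    # VCPU schedulable: 32.0
    # DISK_GB schedulable: 53.1
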
Feb 02 11:33:30 compute-0 nova_compute[251290]: 2026-02-02 11:33:30.474 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:33:30 compute-0 nova_compute[251290]: 2026-02-02 11:33:30.474 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:33:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:30.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:30.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
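
Note: the paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 recur every two seconds for the rest of this section and behave like load-balancer health probes against the beast frontend, always returning 200 with near-zero latency. A rough reproduction; the RGW port is an assumption (the log does not record it), and http.client speaks HTTP/1.1 rather than the probes' HTTP/1.0:

    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # beast logs 200 for these probes
    conn.close()
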
Feb 02 11:33:31 compute-0 ceph-mon[74676]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3116858104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:33:31 compute-0 nova_compute[251290]: 2026-02-02 11:33:31.473 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:31 compute-0 nova_compute[251290]: 2026-02-02 11:33:31.473 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:33:31 compute-0 nova_compute[251290]: 2026-02-02 11:33:31.474 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:33:31 compute-0 nova_compute[251290]: 2026-02-02 11:33:31.474 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:33:31 compute-0 nova_compute[251290]: 2026-02-02 11:33:31.489 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:33:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:33 compute-0 ceph-mon[74676]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:34 compute-0 podman[256197]: 2026-02-02 11:33:34.271693091 +0000 UTC m=+0.059827628 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 02 11:33:34 compute-0 podman[256198]: 2026-02-02 11:33:34.299636543 +0000 UTC m=+0.083113076 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
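
Note: the two health_status=healthy events above are podman's healthcheck timers running the mounted /openstack/healthcheck script (see the 'healthcheck' key in each config_data) inside ovn_metadata_agent and ovn_controller. The current status can be read back directly; container names are from the log, and the Go template path varies slightly across podman versions:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        status = subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name], text=True).strip()
        print(name, status)  # expected: healthy
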
Feb 02 11:33:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:34.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:34 compute-0 ceph-mon[74676]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
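
Note: the ganesha daemon above starts a fresh epoch, immediately leaves grace, and warns that its configuration file defines no export entries yet; the D-Bus CRITs are explained by the log itself (/run/dbus/system_bus_socket does not exist inside the container), so the dbus service thread exits while NFS service continues. For this cephadm deployment, exports are managed through the mgr rather than ganesha.conf; a sketch to list them, assuming admin credentials and taking the cluster id "cephfs" from the service name nfs-cephfs-2-0 above:

    import subprocess

    # An empty listing here would match the "No export entries found"
    # startup warning.
    subprocess.run(["ceph", "nfs", "export", "ls", "cephfs"], check=True)
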
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
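
Note: these TIRPC "proxy header rest len failed ... (will set dead)" events repeat for the rest of the section and coincide with the haproxy Layer4 checks logged at 11:33:37 below; they are consistent with a plain TCP connect that closes without sending the PROXY-protocol header or any RPC record, after which ganesha discards the transport. The bare "%" appears to be a formatting quirk in the daemon's own message, not log corruption. A connect-only probe that should trigger one such event; the host and the backend port are assumptions:

    import socket

    # Open and immediately close a TCP connection to the ganesha backend,
    # sending no PROXY header and no RPC record.
    with socket.create_connection(
            ("compute-0.ctlplane.example.com", 12049), timeout=2):
        pass
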
Feb 02 11:33:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:33:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:36 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:36.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:33:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Feb 02 11:33:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:37.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
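
Note: the alertmanager dispatcher above is failing to deliver the ceph-dashboard webhook to the receivers on compute-1 (context deadline exceeded) and compute-2 (i/o timeout), while compute-0's own receiver is absent from the error. A direct reachability check for one failing receiver; the URL is copied from the log, and the empty alert list is a placeholder payload, not alertmanager's real body:

    import json
    import urllib.request

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:  # timeouts and refusals both land here
        print("receiver unreachable:", exc)
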
Feb 02 11:33:37 compute-0 ceph-mon[74676]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:33:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113337 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:33:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:37 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:37 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:38 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:39 compute-0 ceph-mon[74676]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:39 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:39 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:40 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:40.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:41 compute-0 ceph-mon[74676]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:41 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:41 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:42 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:42.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:42.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:43 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:43 compute-0 ceph-mon[74676]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb 02 11:33:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:43 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:44 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1297596244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:33:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1297596244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:33:44 compute-0 ceph-mon[74676]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:33:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:44 compute-0 sudo[256269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:33:44 compute-0 sudo[256269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:44 compute-0 sudo[256269]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:33:45 compute-0 sudo[256295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:33:45 compute-0 sudo[256295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:45 compute-0 sudo[256295]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:45 compute-0 sudo[256320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:33:45 compute-0 sudo[256320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:46 compute-0 sudo[256320]: pam_unix(sudo:session): session closed for user root
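
Note: the sudo triplet above (/bin/true, which python3, then the copied cephadm binary with gather-facts) is the mgr's cephadm module auditing this host over SSH as ceph-admin. Run locally, the same invocation returns a JSON document of host facts; the binary path and timeout are copied from the log line, while the selected keys are assumptions about the output:

    import json
    import subprocess

    facts = json.loads(subprocess.check_output(
        ["sudo", "python3",
         "/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"], text=True))
    print(facts.get("hostname"), facts.get("memory_total_kb"))
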
Feb 02 11:33:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:46 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:33:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:33:46 compute-0 sudo[256378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:33:46 compute-0 sudo[256378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:46 compute-0 sudo[256378]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:46 compute-0 sudo[256403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:33:46 compute-0 sudo[256403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
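
Note: the ceph-volume command above drives OSD preparation for the default_drive_group spec (CEPH_VOLUME_OSDSPEC_AFFINITY): cephadm runs `lvm batch` against /dev/ceph_vg0/ceph_lv0 inside the pinned ceph image, and `--config-json -` means the minimal ceph.conf plus the client.bootstrap-osd keyring, fetched via the "config generate-minimal-conf" and "auth get" dispatches above, arrive on stdin as JSON. A reconstruction of that handoff, with the env/image flags omitted for brevity and the keyring value a placeholder:

    import json
    import subprocess

    cephadm = ("/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    payload = json.dumps({
        "config": "[global]\nfsid = 1d33f80b-d6ca-501c-bac7-184379b89279\n",
        "keyring": "[client.bootstrap-osd]\n\tkey = <redacted>\n",
    })
    subprocess.run(
        ["sudo", "python3", cephadm, "--timeout", "895", "ceph-volume",
         "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
         "--config-json", "-", "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=payload, text=True, check=True)
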
Feb 02 11:33:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:46 compute-0 ceph-mon[74676]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:33:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.763009643 +0000 UTC m=+0.038418164 container create b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:33:46 compute-0 systemd[1]: Started libpod-conmon-b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5.scope.
Feb 02 11:33:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:46.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.746337365 +0000 UTC m=+0.021745906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.85316787 +0000 UTC m=+0.128576411 container init b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.860070788 +0000 UTC m=+0.135479299 container start b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.864021662 +0000 UTC m=+0.139430183 container attach b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:33:46 compute-0 busy_tu[256484]: 167 167
Feb 02 11:33:46 compute-0 systemd[1]: libpod-b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5.scope: Deactivated successfully.
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.865460023 +0000 UTC m=+0.140868544 container died b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:33:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-371996792e721c5715f2c4c9c37657d51146558e1c430e092baae7337a0a4ceb-merged.mount: Deactivated successfully.
Feb 02 11:33:46 compute-0 podman[256467]: 2026-02-02 11:33:46.987297429 +0000 UTC m=+0.262705950 container remove b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:33:46 compute-0 systemd[1]: libpod-conmon-b5ffcfa2a2fe4a387de5cdecd3a1a754af96d7337b8dc9b14110a70970afd5b5.scope: Deactivated successfully.
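
Note: the short-lived busy_tu container above exists only to print "167 167", the ceph uid and gid inside the image, which cephadm uses to chown host paths consistently; the whole create, start, died, remove cycle takes about 0.2 s. An equivalent probe, where the probed path and stat command are assumptions about what cephadm runs (the image digest is from the log):

    import subprocess

    uid_gid = subprocess.check_output(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
         "stat", "-c", "%u %g", "/var/lib/ceph"], text=True)
    print(uid_gid.strip())  # 167 167
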
Feb 02 11:33:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:47.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.139424524 +0000 UTC m=+0.058097307 container create 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:33:47 compute-0 systemd[1]: Started libpod-conmon-757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d.scope.
Feb 02 11:33:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.112040338 +0000 UTC m=+0.030713151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.21699212 +0000 UTC m=+0.135664923 container init 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.224721671 +0000 UTC m=+0.143394464 container start 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.228308604 +0000 UTC m=+0.146981387 container attach 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:33:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:47 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:47 compute-0 jolly_swartz[256526]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:33:47 compute-0 jolly_swartz[256526]: --> All data devices are unavailable
Feb 02 11:33:47 compute-0 systemd[1]: libpod-757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d.scope: Deactivated successfully.
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.535963103 +0000 UTC m=+0.454635886 container died 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9996b654b22d2d481efc8f537a182dc26945456cd9ceb4de6f3b3046eb8ac2-merged.mount: Deactivated successfully.
Feb 02 11:33:47 compute-0 podman[256509]: 2026-02-02 11:33:47.579648537 +0000 UTC m=+0.498321320 container remove 757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:33:47 compute-0 systemd[1]: libpod-conmon-757ff5eb9818913f6a93d7491c84fdd4da27b5d1bc400b22db1af1a047d54b4d.scope: Deactivated successfully.
Feb 02 11:33:47 compute-0 sudo[256403]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:47 compute-0 sudo[256553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:33:47 compute-0 sudo[256553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:47 compute-0 sudo[256553]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:47 compute-0 sudo[256578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:33:47 compute-0 sudo[256578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:47 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.091891066 +0000 UTC m=+0.039293699 container create 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:33:48 compute-0 systemd[1]: Started libpod-conmon-47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100.scope.
Feb 02 11:33:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.169119252 +0000 UTC m=+0.116521905 container init 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.074847717 +0000 UTC m=+0.022250360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.176971068 +0000 UTC m=+0.124373701 container start 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.180989713 +0000 UTC m=+0.128392346 container attach 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 11:33:48 compute-0 affectionate_allen[256660]: 167 167
Feb 02 11:33:48 compute-0 systemd[1]: libpod-47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100.scope: Deactivated successfully.
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.183251358 +0000 UTC m=+0.130653991 container died 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-511c57decd40d4df2c4a94e5dd3b15bc8e2f76303cb1851108d51a86674293f9-merged.mount: Deactivated successfully.
Feb 02 11:33:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:48 compute-0 podman[256643]: 2026-02-02 11:33:48.216532443 +0000 UTC m=+0.163935076 container remove 47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:33:48 compute-0 systemd[1]: libpod-conmon-47afe7ba1dbfda663800c3d1a42f9eae33e62f8fca98594e6d046645eb671100.scope: Deactivated successfully.
Feb 02 11:33:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.380131238 +0000 UTC m=+0.058266493 container create 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:33:48 compute-0 systemd[1]: Started libpod-conmon-7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c.scope.
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.345448992 +0000 UTC m=+0.023584267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b61b0708693087cd60f54089cabab0d7ae9bd98dca8b1a40f249f9fa1f7861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b61b0708693087cd60f54089cabab0d7ae9bd98dca8b1a40f249f9fa1f7861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b61b0708693087cd60f54089cabab0d7ae9bd98dca8b1a40f249f9fa1f7861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b61b0708693087cd60f54089cabab0d7ae9bd98dca8b1a40f249f9fa1f7861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.482289559 +0000 UTC m=+0.160424824 container init 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.488809836 +0000 UTC m=+0.166945091 container start 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.492906764 +0000 UTC m=+0.171042029 container attach 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:33:48 compute-0 nice_kepler[256702]: {
Feb 02 11:33:48 compute-0 nice_kepler[256702]:     "1": [
Feb 02 11:33:48 compute-0 nice_kepler[256702]:         {
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "devices": [
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "/dev/loop3"
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             ],
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "lv_name": "ceph_lv0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "lv_size": "21470642176",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "name": "ceph_lv0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "tags": {
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.cluster_name": "ceph",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.crush_device_class": "",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.encrypted": "0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.osd_id": "1",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.type": "block",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.vdo": "0",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:                 "ceph.with_tpm": "0"
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             },
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "type": "block",
Feb 02 11:33:48 compute-0 nice_kepler[256702]:             "vg_name": "ceph_vg0"
Feb 02 11:33:48 compute-0 nice_kepler[256702]:         }
Feb 02 11:33:48 compute-0 nice_kepler[256702]:     ]
Feb 02 11:33:48 compute-0 nice_kepler[256702]: }
Feb 02 11:33:48 compute-0 systemd[1]: libpod-7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c.scope: Deactivated successfully.
Feb 02 11:33:48 compute-0 podman[256686]: 2026-02-02 11:33:48.802678613 +0000 UTC m=+0.480813878 container died 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:33:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:48.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-09b61b0708693087cd60f54089cabab0d7ae9bd98dca8b1a40f249f9fa1f7861-merged.mount: Deactivated successfully.
Feb 02 11:33:49 compute-0 podman[256686]: 2026-02-02 11:33:49.106652276 +0000 UTC m=+0.784787541 container remove 7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kepler, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:33:49 compute-0 sudo[256578]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:49 compute-0 systemd[1]: libpod-conmon-7c4f1caf4b95bab21d1c37ac03acacf4297bd7c890caa855be2a0b4efb17f42c.scope: Deactivated successfully.
Feb 02 11:33:49 compute-0 sudo[256725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:33:49 compute-0 sudo[256725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:49 compute-0 sudo[256725]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:49 compute-0 sudo[256750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:33:49 compute-0 sudo[256750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:49 compute-0 ceph-mon[74676]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:49 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.676261282 +0000 UTC m=+0.045059624 container create 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:33:49 compute-0 systemd[1]: Started libpod-conmon-646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e.scope.
Feb 02 11:33:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.748004641 +0000 UTC m=+0.116803003 container init 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.65668186 +0000 UTC m=+0.025480222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.755847556 +0000 UTC m=+0.124645898 container start 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.760429807 +0000 UTC m=+0.129228149 container attach 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:33:49 compute-0 upbeat_benz[256834]: 167 167
Feb 02 11:33:49 compute-0 systemd[1]: libpod-646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e.scope: Deactivated successfully.
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.762418214 +0000 UTC m=+0.131216556 container died 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0789a9620173d4dea63fd48f3cf54b6e2d29e752b9ff161b796017ec556bfa6-merged.mount: Deactivated successfully.
Feb 02 11:33:49 compute-0 podman[256818]: 2026-02-02 11:33:49.798800769 +0000 UTC m=+0.167599111 container remove 646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:33:49 compute-0 systemd[1]: libpod-conmon-646ac74a9a698148d2026f59e4e92085281829849b44b307671c47c08687787e.scope: Deactivated successfully.
Feb 02 11:33:49 compute-0 podman[256860]: 2026-02-02 11:33:49.935687577 +0000 UTC m=+0.043246652 container create fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:33:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:49 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:49 compute-0 systemd[1]: Started libpod-conmon-fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930.scope.
Feb 02 11:33:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562e951b854180aca37e7324c7b15baba7d7f72b0221b7acdb0fa90057632d2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562e951b854180aca37e7324c7b15baba7d7f72b0221b7acdb0fa90057632d2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562e951b854180aca37e7324c7b15baba7d7f72b0221b7acdb0fa90057632d2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562e951b854180aca37e7324c7b15baba7d7f72b0221b7acdb0fa90057632d2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:49.918182744 +0000 UTC m=+0.025741839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:50.013457608 +0000 UTC m=+0.121016713 container init fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:50.019862482 +0000 UTC m=+0.127421557 container start fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:50.024105494 +0000 UTC m=+0.131664599 container attach fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:33:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:50 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:50 compute-0 lvm[256950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:33:50 compute-0 lvm[256950]: VG ceph_vg0 finished
Feb 02 11:33:50 compute-0 eloquent_brattain[256876]: {}
Feb 02 11:33:50 compute-0 systemd[1]: libpod-fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930.scope: Deactivated successfully.
Feb 02 11:33:50 compute-0 systemd[1]: libpod-fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930.scope: Consumed 1.076s CPU time.
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:50.773340743 +0000 UTC m=+0.880899818 container died fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-562e951b854180aca37e7324c7b15baba7d7f72b0221b7acdb0fa90057632d2d-merged.mount: Deactivated successfully.
Feb 02 11:33:50 compute-0 podman[256860]: 2026-02-02 11:33:50.816176772 +0000 UTC m=+0.923735847 container remove fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_brattain, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:33:50 compute-0 systemd[1]: libpod-conmon-fd08fbbc4b1870951bc2f5aff5974470bf923b8c8c0534942021af7740c88930.scope: Deactivated successfully.
Feb 02 11:33:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:50 compute-0 sudo[256750]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:33:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:50.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:33:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:50 compute-0 sudo[256968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:33:50 compute-0 sudo[256968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:33:50 compute-0 sudo[256968]: pam_unix(sudo:session): session closed for user root
Feb 02 11:33:51 compute-0 ceph-mon[74676]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:33:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:52 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:53 compute-0 ceph-mon[74676]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=404 latency=0.003000086s ======
Feb 02 11:33:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:53.323 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.003000086s
Feb 02 11:33:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - - [02/Feb/2026:11:33:53.342 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000029s
Feb 02 11:33:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:53 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:53 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:54.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:33:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:54.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:33:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:55 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:55 compute-0 ceph-mon[74676]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:55 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:33:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:56 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:33:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:56.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:33:56] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Feb 02 11:33:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:33:57.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:33:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:57 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:57 compute-0 ceph-mon[74676]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb 02 11:33:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:57 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:58 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 02 11:33:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 02 11:33:58 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 02 11:33:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:33:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:33:58.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:33:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:33:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:33:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:33:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:33:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:59 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:33:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 02 11:33:59 compute-0 ceph-mon[74676]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:33:59 compute-0 ceph-mon[74676]: osdmap e141: 3 total, 3 up, 3 in
Feb 02 11:33:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 02 11:33:59 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 02 11:33:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:33:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
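The audit entry records mgr.compute-0.dhyzzj dispatching "osd blocklist ls" to the mon; the same query reappears roughly every 15 seconds below, so this is routine mgr polling rather than operator activity. The identical query from the CLI, as a sketch:

    import json
    import subprocess

    # Same query the mgr dispatches above, issued via the ceph CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) or "blocklist is empty")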
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:33:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:33:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:33:59 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb 02 11:34:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 02 11:34:00 compute-0 ceph-mon[74676]: osdmap e142: 3 total, 3 up, 3 in
Feb 02 11:34:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 02 11:34:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 02 11:34:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:00.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:01 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:01 compute-0 ceph-mon[74676]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb 02 11:34:01 compute-0 ceph-mon[74676]: osdmap e143: 3 total, 3 up, 3 in
Feb 02 11:34:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
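_set_new_cache_sizes fires about every five seconds as the mon re-splits its cache budget. Decoded, the figures are roughly a 0.95 GiB total with 332 MiB each for the incremental and full osdmap caches and 304 MiB for the kv (RocksDB) cache; a quick check of that arithmetic:

    # Decode the _set_new_cache_sizes figures logged above.
    MiB = 1024 ** 2
    figures = {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }
    for name, val in figures.items():
        print(f"{name}: {val / MiB:.0f} MiB "
              f"({val / figures['cache_size']:.0%} of budget)")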
Feb 02 11:34:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:01 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00580016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 21 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Feb 02 11:34:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:02 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 02 11:34:02 compute-0 ceph-mon[74676]: pgmap v663: 353 pgs: 353 active+clean; 21 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Feb 02 11:34:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 02 11:34:02 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 02 11:34:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:02.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:03 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:03 compute-0 ceph-mon[74676]: osdmap e144: 3 total, 3 up, 3 in
Feb 02 11:34:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:03 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 21 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Feb 02 11:34:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:04 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:04 compute-0 ceph-mon[74676]: pgmap v665: 353 pgs: 353 active+clean; 21 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Feb 02 11:34:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:04.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:04.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:05 compute-0 sudo[257007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:34:05 compute-0 sudo[257007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:05 compute-0 sudo[257007]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:05 compute-0 podman[257031]: 2026-02-02 11:34:05.118177495 +0000 UTC m=+0.056961655 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 11:34:05 compute-0 podman[257032]: 2026-02-02 11:34:05.151972345 +0000 UTC m=+0.090438226 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
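Both EDPM containers report health_status=healthy with a zero failing streak; per the config_data above, the check is the /openstack/healthcheck script bind-mounted read-only into each container. The same check can be run on demand, outside the timer; a sketch using the container names from these two entries:

    import subprocess

    # Run the configured healthcheck once; exit status 0 means healthy.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")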
Feb 02 11:34:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:05 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:05 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Feb 02 11:34:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:06 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 02 11:34:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 02 11:34:06 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 02 11:34:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:06.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:06] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Feb 02 11:34:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:06] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
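Each Prometheus scrape lands in the journal twice: once via the mgr container's stdout unit (ceph-...-mgr-compute-0-dhyzzj) and once via the mgr's own cherrypy access logger. Both describe the same ten-second pull; the payload grew by one byte (48268 to 48269) between scrapes. Fetching the same endpoint directly, as a sketch (9283 is the mgr prometheus module's default port and an assumption here, since no port is logged):

    import urllib.request

    # Pull the ceph-mgr prometheus endpoint the scraper above is hitting.
    with urllib.request.urlopen(
            "http://192.168.122.100:9283/metrics", timeout=5) as r:
        body = r.read()
    print(r.status, len(body), "bytes")   # the access log shows ~48 KiB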
Feb 02 11:34:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:07.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:34:07 compute-0 ceph-mon[74676]: pgmap v666: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Feb 02 11:34:07 compute-0 ceph-mon[74676]: osdmap e145: 3 total, 3 up, 3 in
Feb 02 11:34:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:07 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:07 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Feb 02 11:34:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:08 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:09 compute-0 ceph-mon[74676]: pgmap v668: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Feb 02 11:34:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:09 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:09 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0060001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Feb 02 11:34:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:10 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:11 compute-0 ceph-mon[74676]: pgmap v669: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Feb 02 11:34:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:11 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:11 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Feb 02 11:34:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:12 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0060001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:12.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:13 compute-0 ceph-mon[74676]: pgmap v670: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Feb 02 11:34:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:13 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:13 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Feb 02 11:34:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:14 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:34:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:15 compute-0 ceph-mon[74676]: pgmap v671: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Feb 02 11:34:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:15 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0060001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:15 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Feb 02 11:34:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:16 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
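Here the NFS ingress haproxy marks backend nfs.cephfs.0 DOWN after a Layer4 refusal in 0ms: the SYN was answered with a reset, so nothing is listening on that backend (unlike the PROXY-header events above, where connections are accepted and then dropped). Two active servers remain. A probe that distinguishes "refused" from "filtered or slow", as a sketch with an assumed host and port:

    import socket

    # Distinguish "refused" (no listener) from "timeout" (filtered/slow).
    def l4_check(host: str, port: int) -> str:
        try:
            socket.create_connection((host, port), timeout=2).close()
            return "accepting"
        except ConnectionRefusedError:
            return "refused (no listener)"   # haproxy's Layer4 reason above
        except OSError as exc:
            return f"failed: {exc}"

    print(l4_check("compute-0", 12049))      # backend port is an assumption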
Feb 02 11:34:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:16.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:16] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Feb 02 11:34:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:16] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Feb 02 11:34:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:17.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:34:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:17 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:17 compute-0 ceph-mon[74676]: pgmap v672: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Feb 02 11:34:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:17 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0060002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s
Feb 02 11:34:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:18 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:18.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:18.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:19 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:19 compute-0 ceph-mon[74676]: pgmap v673: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s
Feb 02 11:34:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:19 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:20 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0060002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:20.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:21 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:21 compute-0 ceph-mon[74676]: pgmap v674: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:21 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:22.668 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:22.669 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:22.669 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:22.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:23 compute-0 ceph-mon[74676]: pgmap v675: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
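With a backend gone, the reaper puts the NFS server IN GRACE for 90 seconds so clients can reclaim locks and state; the reaper events at 11:34:27 below show the grace reload completing and the lift check finding clid count(0), i.e. no clients to wait for, which allows grace to end early. The outer bound of the window, worked out:

    from datetime import datetime, timedelta

    # Grace window from the reaper event above: entered 11:34:24, 90 seconds.
    start = datetime(2026, 2, 2, 11, 34, 24)
    print("grace ends by", start + timedelta(seconds=90))
    # 11:35:54 at the latest; earlier if the lift check finds no clients,
    # as the clid count(0) event below suggests.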
Feb 02 11:34:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.037 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.038 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.038 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 11:34:25 compute-0 nova_compute[251290]: 2026-02-02 11:34:25.052 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:25 compute-0 sudo[257101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:34:25 compute-0 sudo[257101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:25 compute-0 sudo[257101]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:25 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:25 compute-0 ceph-mon[74676]: pgmap v676: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:34:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1475600816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:25 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:25.960 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:34:25 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:25.961 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:34:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:25 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:26 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2328024394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:26.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:26] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Feb 02 11:34:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:26] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Feb 02 11:34:27 compute-0 nova_compute[251290]: 2026-02-02 11:34:27.067 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:27.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:34:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:27 compute-0 ceph-mon[74676]: pgmap v677: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:34:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:34:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:28 compute-0 nova_compute[251290]: 2026-02-02 11:34:28.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:28 compute-0 nova_compute[251290]: 2026-02-02 11:34:28.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:28 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:28 compute-0 ceph-mon[74676]: pgmap v678: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:28.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:28.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:29 compute-0 nova_compute[251290]: 2026-02-02 11:34:29.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:29 compute-0 nova_compute[251290]: 2026-02-02 11:34:29.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:29 compute-0 nova_compute[251290]: 2026-02-02 11:34:29.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
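This confirms deferred delete is off: _reclaim_queued_deletes short-circuits because reclaim_instance_interval <= 0 (the default), so deleted instances are purged immediately instead of lingering in SOFT_DELETED. Reading the knob back from the node's nova config, as a sketch (the path is an assumption for this deployment):

    import configparser

    # Check the option that makes _reclaim_queued_deletes a no-op above.
    cfg = configparser.ConfigParser()
    cfg.read("/etc/nova/nova.conf")   # path assumed for this EDPM node
    print(cfg.getint("DEFAULT", "reclaim_instance_interval", fallback=0))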
Feb 02 11:34:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/576003778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:34:29
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.nfs', 'images', 'default.rgw.control', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
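A no-op balancer pass: in upmap mode with a 5% misplaced ceiling, all twelve pools were evaluated and 0 of a possible 10 upmap changes were prepared, which is expected with all 353 PGs already active+clean on three OSDs. The module can be interrogated directly; a sketch:

    import json
    import subprocess

    # Ask the balancer module for its current state (matches the log above).
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), "active:", status.get("active"))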
Feb 02 11:34:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:34:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:34:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
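Every pg_autoscaler pair above follows one formula: pg target = capacity ratio x bias x (target PGs per OSD x OSD count), then quantized toward a power of two. The logged values are consistent with the default mon_target_pg_per_osd=100 times the 3 OSDs in this cluster, i.e. a multiplier of 300 (an inference; the multiplier itself is not logged): for '.mgr', 7.185749983720779e-06 x 1.0 x 300 reproduces the logged 0.0021557249951162337 exactly. The check:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumes mon_target_pg_per_osd=100 (default) x 3 OSDs = 300.
    pools = {
        ".mgr":   (7.185749983720779e-06, 1.0),
        "images": (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target = {ratio * bias * 300!r}")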
Feb 02 11:34:29 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:29.964 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
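This transaction closes the loop opened at 11:34:25: the SB_Global update raised nb_cfg to 3, the agent waited its announced 4 seconds, and now writes neutron:ovn-metadata-sb-cfg=3 into its Chassis_Private row so Neutron can see the agent is alive and current. Reading the row back, as a sketch (the record UUID is taken from the DbSetCommand above):

    import subprocess

    # Read back the sb-cfg marker the metadata agent just committed.
    print(subprocess.run(
        ["ovn-sbctl", "--columns=external_ids", "list", "Chassis_Private",
         "e4587b97-1121-4d6d-b583-e59641a06362"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())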
Feb 02 11:34:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.041 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.042 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.042 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.042 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.042 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2092953534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:30 compute-0 ceph-mon[74676]: pgmap v679: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Feb 02 11:34:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.530 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.769 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.770 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.770 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.771 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.868 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.868 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:34:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:30.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:30 compute-0 nova_compute[251290]: 2026-02-02 11:34:30.954 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.025 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.025 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.054 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.099 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.118 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:31 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3475098526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:34:31 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431218691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:31 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.594 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.601 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.621 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.623 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:34:31 compute-0 nova_compute[251290]: 2026-02-02 11:34:31.623 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:31 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:32 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3431218691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:32 compute-0 ceph-mon[74676]: pgmap v680: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:32 compute-0 nova_compute[251290]: 2026-02-02 11:34:32.625 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:32 compute-0 nova_compute[251290]: 2026-02-02 11:34:32.625 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:32.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:33 compute-0 nova_compute[251290]: 2026-02-02 11:34:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:34:33 compute-0 nova_compute[251290]: 2026-02-02 11:34:33.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:34:33 compute-0 nova_compute[251290]: 2026-02-02 11:34:33.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:34:33 compute-0 nova_compute[251290]: 2026-02-02 11:34:33.036 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:34:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:33 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:33 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:34 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:34 compute-0 ceph-mon[74676]: pgmap v681: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:35 compute-0 podman[257180]: 2026-02-02 11:34:35.264154196 +0000 UTC m=+0.055200425 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 02 11:34:35 compute-0 podman[257181]: 2026-02-02 11:34:35.32734652 +0000 UTC m=+0.113264602 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 11:34:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:36 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113436 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:34:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:36] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Feb 02 11:34:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:36] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Feb 02 11:34:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:37.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:34:37 compute-0 ceph-mon[74676]: pgmap v682: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb 02 11:34:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:37 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00600038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:37 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:38 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:39 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:39 compute-0 ceph-mon[74676]: pgmap v683: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:39 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:40 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:41 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:41 compute-0 ceph-mon[74676]: pgmap v684: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:41 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:42 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:42 compute-0 ceph-mon[74676]: pgmap v685: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb 02 11:34:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:42.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:43 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:34:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2979190360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:34:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:34:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2979190360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:34:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:43 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2979190360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:34:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2979190360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:34:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:34:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:44 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:34:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:45 compute-0 ceph-mon[74676]: pgmap v686: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:34:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:45 compute-0 sudo[257237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:34:45 compute-0 sudo[257237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:45 compute-0 sudo[257237]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:34:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:46 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:46 compute-0 nova_compute[251290]: 2026-02-02 11:34:46.864 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:46 compute-0 nova_compute[251290]: 2026-02-02 11:34:46.864 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:46 compute-0 nova_compute[251290]: 2026-02-02 11:34:46.921 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 11:34:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:46] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb 02 11:34:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:46] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.058 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.059 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.067 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.068 251294 INFO nova.compute.claims [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Claim successful on node compute-0.ctlplane.example.com
Feb 02 11:34:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:47.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.181 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:47 compute-0 ceph-mon[74676]: pgmap v687: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb 02 11:34:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:47 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:34:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219750224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.637 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.643 251294 DEBUG nova.compute.provider_tree [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.659 251294 DEBUG nova.scheduler.client.report [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.680 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.681 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.725 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.726 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.754 251294 INFO nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.775 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.877 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.881 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.881 251294 INFO nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Creating image(s)
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.913 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.946 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.976 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.982 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:47 compute-0 nova_compute[251290]: 2026-02-02 11:34:47.983 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=cleanup t=2026-02-02T11:34:47.999610027Z level=info msg="Completed cleanup jobs" duration=27.795498ms
Feb 02 11:34:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:47 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugins.update.checker t=2026-02-02T11:34:48.084137223Z level=info msg="Update check succeeded" duration=45.694821ms
Feb 02 11:34:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana.update.checker t=2026-02-02T11:34:48.091827593Z level=info msg="Update check succeeded" duration=56.478061ms
Feb 02 11:34:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:34:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/219750224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:34:48 compute-0 nova_compute[251290]: 2026-02-02 11:34:48.659 251294 WARNING oslo_policy.policy [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 02 11:34:48 compute-0 nova_compute[251290]: 2026-02-02 11:34:48.660 251294 WARNING oslo_policy.policy [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
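
The warning repeated above is nova-compute noticing a JSON-formatted policy_file; as the message itself says, JSON support has been deprecated since the Victoria release and YAML is the intended format. The oslo.policy package ships the converter the message names; a sketch of running it follows (the /etc/nova paths are illustrative assumptions, not taken from this log).

    # One-time conversion of a legacy JSON policy file to YAML.
    # Paths are placeholders; point them at the actual nova policy file.
    oslopolicy-convert-json-to-yaml --namespace nova \
        --policy-file /etc/nova/policy.json \
        --output-file /etc/nova/policy.yaml
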
Feb 02 11:34:48 compute-0 nova_compute[251290]: 2026-02-02 11:34:48.663 251294 DEBUG nova.policy [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:34:48 compute-0 nova_compute[251290]: 2026-02-02 11:34:48.777 251294 DEBUG nova.virt.libvirt.imagebackend [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image locations are: [{'url': 'rbd://1d33f80b-d6ca-501c-bac7-184379b89279/images/8a4b36bd-584f-4a0a-aab3-55c0b12d2d97/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://1d33f80b-d6ca-501c-bac7-184379b89279/images/8a4b36bd-584f-4a0a-aab3-55c0b12d2d97/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 02 11:34:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:48.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:49 compute-0 ceph-mon[74676]: pgmap v688: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:34:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:49 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.648 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.712 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.713 251294 DEBUG nova.virt.images [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] 8a4b36bd-584f-4a0a-aab3-55c0b12d2d97 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.714 251294 DEBUG nova.privsep.utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.715 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.part /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:49 compute-0 nova_compute[251290]: 2026-02-02 11:34:49.875 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Successfully created port: 79942433-cf13-432a-ae35-76cf688e4dec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:34:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:50 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.077 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.part /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.converted" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.080 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.140 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.141 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
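
The block above shows the image-cache fetch path end to end: qemu-img info on the downloaded .part file (run under oslo_concurrency.prlimit, whose --as=1073741824 --cpu=30 caps the helper at 1 GiB of address space and 30 s of CPU time), a qcow2-to-raw conversion because the RBD backend needs raw images, and a second qemu-img info to validate the result before the cache lock is released after 2.158s. The same sequence can be reproduced by hand; the sketch below uses placeholder file names instead of the _base cache hash seen above.

    # Inspect the source format, convert qcow2 to raw, then re-check the result.
    # File names are placeholders for the _base cache entries in the log.
    qemu-img info --force-share --output=json image.part
    qemu-img convert -t none -O raw -f qcow2 image.part image.converted
    qemu-img info --force-share --output=json image.converted
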
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.171 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:50 compute-0 nova_compute[251290]: 2026-02-02 11:34:50.177 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 773f45b2-ee63-471e-8884-36748ebdf289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:34:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:50 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 02 11:34:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 02 11:34:50 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 02 11:34:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:50.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:50.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.121 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Successfully updated port: 79942433-cf13-432a-ae35-76cf688e4dec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.137 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.137 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.137 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:34:51 compute-0 sudo[257396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:34:51 compute-0 sudo[257396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:51 compute-0 sudo[257396]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:51 compute-0 sudo[257421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:34:51 compute-0 sudo[257421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.312 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 11:34:51 compute-0 ceph-mon[74676]: pgmap v689: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb 02 11:34:51 compute-0 ceph-mon[74676]: osdmap e146: 3 total, 3 up, 3 in
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 02 11:34:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.605 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 773f45b2-ee63-471e-8884-36748ebdf289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:51 compute-0 sudo[257421]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.693 251294 DEBUG nova.compute.manager [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-changed-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.693 251294 DEBUG nova.compute.manager [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing instance network info cache due to event network-changed-79942433-cf13-432a-ae35-76cf688e4dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.693 251294 DEBUG oslo_concurrency.lockutils [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.699 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] resizing rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
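
The import/resize pair above is how nova seeds the instance disk: the converted raw file is imported into the vms pool as <instance-uuid>_disk, then resized to 1073741824 bytes, i.e. exactly 1 GiB (1024^3), matching the m1.nano flavor's root_gb=1 that appears later in the _get_guest_xml dump. To verify the resulting image by hand, something like the following works (a sketch, reusing the client.openstack credentials from the log):

    # List the vms pool and show the imported instance disk.
    rbd ls --pool vms --id openstack --conf /etc/ceph/ceph.conf
    rbd info vms/773f45b2-ee63-471e-8884-36748ebdf289_disk \
        --id openstack --conf /etc/ceph/ceph.conf
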
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:34:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:34:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:34:51 compute-0 sudo[257534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:34:51 compute-0 sudo[257534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:51 compute-0 sudo[257534]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.829 251294 DEBUG nova.objects.instance [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'migration_context' on Instance uuid 773f45b2-ee63-471e-8884-36748ebdf289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.846 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.846 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Ensure instance console log exists: /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.846 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.847 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:51 compute-0 nova_compute[251290]: 2026-02-02 11:34:51.847 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:51 compute-0 sudo[257577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:34:51 compute-0 sudo[257577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:52 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.257210934 +0000 UTC m=+0.035560961 container create 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:34:52 compute-0 systemd[1]: Started libpod-conmon-1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8.scope.
Feb 02 11:34:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:52 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.241246226 +0000 UTC m=+0.019596283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.34102972 +0000 UTC m=+0.119379777 container init 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.347847965 +0000 UTC m=+0.126197992 container start 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:34:52 compute-0 systemd[1]: libpod-1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8.scope: Deactivated successfully.
Feb 02 11:34:52 compute-0 sleepy_wing[257658]: 167 167
Feb 02 11:34:52 compute-0 conmon[257658]: conmon 1688f7da8ec36206a635 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8.scope/container/memory.events
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.355763663 +0000 UTC m=+0.134113690 container attach 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.357013188 +0000 UTC m=+0.135363215 container died 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd15c4539b66fa46dac36f0804fc77ed7bced8aded2abf945e2e8b9f752650be-merged.mount: Deactivated successfully.
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.383 251294 DEBUG nova.network.neutron [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:34:52 compute-0 ceph-mon[74676]: osdmap e147: 3 total, 3 up, 3 in
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:34:52 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:34:52 compute-0 podman[257642]: 2026-02-02 11:34:52.409716261 +0000 UTC m=+0.188066288 container remove 1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.413 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.413 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Instance network_info: |[{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.414 251294 DEBUG oslo_concurrency.lockutils [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.415 251294 DEBUG nova.network.neutron [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.418 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Start _get_guest_xml network_info=[{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '8a4b36bd-584f-4a0a-aab3-55c0b12d2d97'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 11:34:52 compute-0 systemd[1]: libpod-conmon-1688f7da8ec36206a63549409915cddc5f17ae42a8d803287b91aaf5ba50c3b8.scope: Deactivated successfully.
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.426 251294 WARNING nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.438 251294 DEBUG nova.virt.libvirt.host [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.440 251294 DEBUG nova.virt.libvirt.host [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.444 251294 DEBUG nova.virt.libvirt.host [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.444 251294 DEBUG nova.virt.libvirt.host [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.445 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.445 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:33:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5413fce8-24ad-46a1-a21e-000a8299c8f6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.446 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.446 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.446 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.447 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.447 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.447 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.447 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.448 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.448 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.448 251294 DEBUG nova.virt.hardware [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.454 251294 DEBUG nova.privsep.utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.455 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.55499278 +0000 UTC m=+0.044919150 container create ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:34:52 compute-0 systemd[1]: Started libpod-conmon-ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9.scope.
Feb 02 11:34:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.535933383 +0000 UTC m=+0.025859773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.66161907 +0000 UTC m=+0.151545460 container init ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.668506947 +0000 UTC m=+0.158433317 container start ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.672886123 +0000 UTC m=+0.162812683 container attach ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:34:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:52.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:34:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304548607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.925 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
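
nova runs `ceph mon dump --format=json` (above) to discover monitor addresses for the RBD disk elements it is about to write into the guest XML; the audited mon_command at 11:34:52 is the server side of the same call. To pull just the monitor addresses out of that JSON, a jq filter like the one below would do (jq usage and the public_addr field name are assumptions based on Squid-era output; nova itself parses the JSON in Python):

    # Extract monitor addresses from the same dump nova requests.
    ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf \
        | jq -r '.mons[].public_addr'
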
Feb 02 11:34:52 compute-0 inspiring_spence[257716]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:34:52 compute-0 inspiring_spence[257716]: --> All data devices are unavailable
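
The two inspiring_spence lines above are ceph-volume's verdict for the `lvm batch --no-auto /dev/ceph_vg0/ceph_lv0` run that cephadm launched at 11:34:51: the single LVM data device passed in is unavailable (typically because it is already prepared as an OSD), so there is nothing to create. Before letting batch touch devices, a dry run reports what it would do without changing anything (a sketch; run it inside the cephadm-managed container, or via cephadm as the log does):

    # Dry-run: report what lvm batch would create, without modifying devices.
    ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --report
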
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.955 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:52 compute-0 nova_compute[251290]: 2026-02-02 11:34:52.959 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:52 compute-0 systemd[1]: libpod-ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9.scope: Deactivated successfully.
Feb 02 11:34:52 compute-0 podman[257681]: 2026-02-02 11:34:52.987012447 +0000 UTC m=+0.476938827 container died ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fb8d782bf0d63c590922de9d443d97aca8c03b3643f06b09987f0873a76e103-merged.mount: Deactivated successfully.
Feb 02 11:34:53 compute-0 podman[257681]: 2026-02-02 11:34:53.031994528 +0000 UTC m=+0.521920898 container remove ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:34:53 compute-0 systemd[1]: libpod-conmon-ca864505089277ed2922a69c10aca236b970633a6aed77f1cbaab357a55ebbe9.scope: Deactivated successfully.
Feb 02 11:34:53 compute-0 sudo[257577]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:53 compute-0 sudo[257775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:34:53 compute-0 sudo[257775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:53 compute-0 sudo[257775]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:53 compute-0 sudo[257809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:34:53 compute-0 sudo[257809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
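The sudo line above is cephadm re-executing itself to run `ceph-volume ... lvm list --format json` inside a throwaway Ceph container; the surrounding podman create/init/start/attach/died/remove cycles are those short-lived containers. A hedged approximation of such an invocation; the mounts and flags are illustrative guesses, not a transcript of cephadm's real argv:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Roughly what cephadm does: a --rm privileged container with host devices
    # and ceph state mounted in. Mount list is an illustrative assumption.
    cmd = [
        "podman", "run", "--rm", "--privileged", "--net=host",
        "-v", "/dev:/dev",
        "-v", "/run/udev:/run/udev",
        "-v", "/var/lib/ceph:/var/lib/ceph",
        "-v", "/etc/ceph:/etc/ceph",
        "--entrypoint", "ceph-volume",
        IMAGE, "lvm", "list", "--format", "json",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)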
Feb 02 11:34:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:53 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:53 compute-0 ceph-mon[74676]: pgmap v692: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Feb 02 11:34:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/304548607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:34:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:34:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913756648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.459 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.462 251294 DEBUG nova.virt.libvirt.vif [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:34:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1706655855',display_name='tempest-TestNetworkBasicOps-server-1706655855',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1706655855',id=1,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJILa1pJ0lMUnAszlbq6jqwpvMTgRVY/oBl6jvDH3JNgZy2n8zWBmgqZ4xT99avbs9P9LPHO41NiPnKt5YdAc9UIfIeWurJQe6O/sIxYJXCVhMCy77X9aVAhJ9Pdy3RPA==',key_name='tempest-TestNetworkBasicOps-866424295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-j6d8awkp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:34:47Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=773f45b2-ee63-471e-8884-36748ebdf289,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.462 251294 DEBUG nova.network.os_vif_util [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.463 251294 DEBUG nova.network.os_vif_util [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.466 251294 DEBUG nova.objects.instance [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_devices' on Instance uuid 773f45b2-ee63-471e-8884-36748ebdf289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.468 251294 DEBUG nova.network.neutron [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updated VIF entry in instance network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.469 251294 DEBUG nova.network.neutron [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.495 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] End _get_guest_xml xml=<domain type="kvm">
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <uuid>773f45b2-ee63-471e-8884-36748ebdf289</uuid>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <name>instance-00000001</name>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <memory>131072</memory>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <vcpu>1</vcpu>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:name>tempest-TestNetworkBasicOps-server-1706655855</nova:name>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:creationTime>2026-02-02 11:34:52</nova:creationTime>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:flavor name="m1.nano">
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:memory>128</nova:memory>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:disk>1</nova:disk>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:swap>0</nova:swap>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:vcpus>1</nova:vcpus>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </nova:flavor>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:owner>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </nova:owner>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <nova:ports>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <nova:port uuid="79942433-cf13-432a-ae35-76cf688e4dec">
Feb 02 11:34:53 compute-0 nova_compute[251290]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         </nova:port>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </nova:ports>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </nova:instance>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <sysinfo type="smbios">
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <system>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="manufacturer">RDO</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="product">OpenStack Compute</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="serial">773f45b2-ee63-471e-8884-36748ebdf289</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="uuid">773f45b2-ee63-471e-8884-36748ebdf289</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <entry name="family">Virtual Machine</entry>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </system>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <os>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <boot dev="hd"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <smbios mode="sysinfo"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </os>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <features>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <vmcoreinfo/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </features>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <clock offset="utc">
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <timer name="hpet" present="no"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <cpu mode="host-model" match="exact">
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <disk type="network" device="disk">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/773f45b2-ee63-471e-8884-36748ebdf289_disk">
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </source>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <target dev="vda" bus="virtio"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <disk type="network" device="cdrom">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/773f45b2-ee63-471e-8884-36748ebdf289_disk.config">
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </source>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:34:53 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <target dev="sda" bus="sata"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <interface type="ethernet">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <mac address="fa:16:3e:5a:87:ce"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <mtu size="1442"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <target dev="tap79942433-cf"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <serial type="pty">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <log file="/var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/console.log" append="off"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <video>
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </video>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <input type="tablet" bus="usb"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <rng model="virtio">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <backend model="random">/dev/urandom</backend>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <controller type="usb" index="0"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     <memballoon model="virtio">
Feb 02 11:34:53 compute-0 nova_compute[251290]:       <stats period="10"/>
Feb 02 11:34:53 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:34:53 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:34:53 compute-0 nova_compute[251290]: </domain>
Feb 02 11:34:53 compute-0 nova_compute[251290]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
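That closes the generated domain XML: an RBD-backed vda and config-drive cdrom (each listing all three monitors plus the cephx secret reference), a virtio interface targeted at the ovs tap, a pty serial console logged to console.log, VNC graphics, and a q35 PCIe root-port set. A small ElementTree sketch that recovers the monitor endpoints from the first rbd disk, assuming the XML above is held in `xml_text`:

    import xml.etree.ElementTree as ET

    def rbd_monitors(xml_text):
        """Return (host, port) pairs from the first rbd-backed disk in a domain XML."""
        dom = ET.fromstring(xml_text)
        src = dom.find('./devices/disk[@type="network"]/source[@protocol="rbd"]')
        if src is None:
            return []
        return [(h.get("name"), h.get("port")) for h in src.findall("host")]

    # For the XML above this yields the three mons on port 6789:
    # [('192.168.122.100', '6789'), ('192.168.122.102', '6789'), ('192.168.122.101', '6789')]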
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.498 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Preparing to wait for external event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.498 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.498 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.499 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.499 251294 DEBUG nova.virt.libvirt.vif [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:34:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1706655855',display_name='tempest-TestNetworkBasicOps-server-1706655855',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1706655855',id=1,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJILa1pJ0lMUnAszlbq6jqwpvMTgRVY/oBl6jvDH3JNgZy2n8zWBmgqZ4xT99avbs9P9LPHO41NiPnKt5YdAc9UIfIeWurJQe6O/sIxYJXCVhMCy77X9aVAhJ9Pdy3RPA==',key_name='tempest-TestNetworkBasicOps-866424295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-j6d8awkp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:34:47Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=773f45b2-ee63-471e-8884-36748ebdf289,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.500 251294 DEBUG nova.network.os_vif_util [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.500 251294 DEBUG nova.network.os_vif_util [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.501 251294 DEBUG os_vif [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.574830465 +0000 UTC m=+0.043127098 container create c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.575 251294 DEBUG ovsdbapp.backend.ovs_idl [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.575 251294 DEBUG ovsdbapp.backend.ovs_idl [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.576 251294 DEBUG ovsdbapp.backend.ovs_idl [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.576 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.577 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.578 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.579 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.580 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.583 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.595 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.596 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.596 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
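The no-op transaction above is os-vif idempotently ensuring br-int exists (may_exist=True) before plugging the tap. A minimal sketch of the same transaction issued directly with ovsdbapp against the endpoint the log shows (tcp:127.0.0.1:6640), reusing the tap name from the VIF above; this is a sketch of the ovsdbapp API from memory, not nova's actual code path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    # Mirrors the logged AddBridgeCommand: both commands are no-ops if the
    # bridge/port already exist, which is why the log says "no change".
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(ovs.add_port("br-int", "tap79942433-cf", may_exist=True))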
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.597 251294 INFO oslo.privsep.daemon [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpudkqqa50/privsep.sock']
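Here nova launches its privsep helper through rootwrap: a separate root process holding only CAP_DAC_OVERRIDE and CAP_NET_ADMIN (confirmed by the daemon's own lines shortly after), which unprivileged code calls over a unix socket instead of invoking sudo per command. A toy sketch of the pattern with oslo.privsep; the context name and entrypoint below are invented for illustration:

    from oslo_privsep import capabilities, priv_context

    # Comparable to vif_plug_ovs.privsep.vif_plug: root uid/gid but a
    # deliberately small capability set ("demo" naming is hypothetical).
    demo = priv_context.PrivContext(
        "demo",
        cfg_section="demo_privsep",
        pypath=__name__ + ".demo",
        capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_DAC_OVERRIDE],
    )

    @demo.entrypoint
    def set_mtu(device, mtu):
        # Executes inside the privsep daemon; the caller stays unprivileged.
        with open(f"/sys/class/net/{device}/mtu", "w") as f:
            f.write(str(mtu))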
Feb 02 11:34:53 compute-0 systemd[1]: Started libpod-conmon-c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025.scope.
Feb 02 11:34:53 compute-0 nova_compute[251290]: 2026-02-02 11:34:53.636 251294 DEBUG oslo_concurrency.lockutils [req-8a26b9c1-ad82-4b8f-b980-71dbbb509455 req-e857d3e9-787f-4419-aeaa-9ca055214b94 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:34:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.558335172 +0000 UTC m=+0.026631825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.65793023 +0000 UTC m=+0.126226863 container init c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.665164028 +0000 UTC m=+0.133460661 container start c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.668794322 +0000 UTC m=+0.137090955 container attach c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:34:53 compute-0 happy_stonebraker[257896]: 167 167
Feb 02 11:34:53 compute-0 systemd[1]: libpod-c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025.scope: Deactivated successfully.
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.67081628 +0000 UTC m=+0.139112913 container died c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a6f20b0d48d094f353d3b9fefb6b0e62eab81569784595f2593b804e3e5e77-merged.mount: Deactivated successfully.
Feb 02 11:34:53 compute-0 podman[257878]: 2026-02-02 11:34:53.707110921 +0000 UTC m=+0.175407564 container remove c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_stonebraker, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:34:53 compute-0 systemd[1]: libpod-conmon-c2af3ed0255c60609c154fb4025d33c00b051a5b26571702bb1f034239630025.scope: Deactivated successfully.
Feb 02 11:34:53 compute-0 podman[257921]: 2026-02-02 11:34:53.849265291 +0000 UTC m=+0.041204034 container create 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:34:53 compute-0 systemd[1]: Started libpod-conmon-1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71.scope.
Feb 02 11:34:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da870bf632237312e0d557cd8b1dc0ad2bbc728334336b60f178080902caa77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da870bf632237312e0d557cd8b1dc0ad2bbc728334336b60f178080902caa77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da870bf632237312e0d557cd8b1dc0ad2bbc728334336b60f178080902caa77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da870bf632237312e0d557cd8b1dc0ad2bbc728334336b60f178080902caa77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:53 compute-0 podman[257921]: 2026-02-02 11:34:53.828499085 +0000 UTC m=+0.020437828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:53 compute-0 podman[257921]: 2026-02-02 11:34:53.941073735 +0000 UTC m=+0.133012508 container init 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:34:53 compute-0 podman[257921]: 2026-02-02 11:34:53.947053187 +0000 UTC m=+0.138991960 container start 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:34:53 compute-0 podman[257921]: 2026-02-02 11:34:53.950960599 +0000 UTC m=+0.142899342 container attach 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:34:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:54 compute-0 interesting_mclean[257938]: {
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:     "1": [
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:         {
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "devices": [
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "/dev/loop3"
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             ],
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "lv_name": "ceph_lv0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "lv_size": "21470642176",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "name": "ceph_lv0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "tags": {
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.cluster_name": "ceph",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.crush_device_class": "",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.encrypted": "0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.osd_id": "1",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.type": "block",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.vdo": "0",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:                 "ceph.with_tpm": "0"
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             },
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "type": "block",
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:             "vg_name": "ceph_vg0"
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:         }
Feb 02 11:34:54 compute-0 interesting_mclean[257938]:     ]
Feb 02 11:34:54 compute-0 interesting_mclean[257938]: }
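The JSON block emitted by interesting_mclean is the `ceph-volume lvm list --format json` result requested above: a map from OSD id to the logical volumes backing it, with cluster fsid, OSD fsid, and device class carried as LV tags. A short sketch that folds that structure into per-OSD summaries, assuming the shape printed here:

    import json

    def osds_from_lvm_list(text):
        """Summarize `ceph-volume lvm list --format json` output."""
        out = []
        for osd_id, lvs in json.loads(text).items():
            for lv in lvs:
                out.append({
                    "osd_id": osd_id,
                    "osd_fsid": lv["tags"].get("ceph.osd_fsid"),
                    "type": lv.get("type"),
                    "devices": lv.get("devices", []),
                })
        return out

    # For the listing above:
    # [{'osd_id': '1', 'osd_fsid': '1ce0bc48-ed90-4057-9723-8baf8c87f572',
    #   'type': 'block', 'devices': ['/dev/loop3']}]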
Feb 02 11:34:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Feb 02 11:34:54 compute-0 systemd[1]: libpod-1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71.scope: Deactivated successfully.
Feb 02 11:34:54 compute-0 podman[257921]: 2026-02-02 11:34:54.256886618 +0000 UTC m=+0.448825381 container died 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8da870bf632237312e0d557cd8b1dc0ad2bbc728334336b60f178080902caa77-merged.mount: Deactivated successfully.
Feb 02 11:34:54 compute-0 podman[257921]: 2026-02-02 11:34:54.297188464 +0000 UTC m=+0.489127207 container remove 1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclean, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:34:54 compute-0 systemd[1]: libpod-conmon-1bfc14d5ae3b4e85aeaa2c646b2c5c6dbb3ead20cef15a9a6b3b9d7852366b71.scope: Deactivated successfully.
Feb 02 11:34:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:54 compute-0 sudo[257809]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.377 251294 INFO oslo.privsep.daemon [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Spawned new privsep daemon via rootwrap
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.205 257947 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.209 257947 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.210 257947 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.211 257947 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257947
Feb 02 11:34:54 compute-0 sudo[257960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:34:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1913756648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:34:54 compute-0 sudo[257960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:54 compute-0 sudo[257960]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.422358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094422402, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 251, "total_data_size": 4229814, "memory_usage": 4295488, "flush_reason": "Manual Compaction"}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094461153, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4131504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19974, "largest_seqno": 22122, "table_properties": {"data_size": 4121758, "index_size": 6176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20225, "raw_average_key_size": 20, "raw_value_size": 4102178, "raw_average_value_size": 4126, "num_data_blocks": 270, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031887, "oldest_key_time": 1770031887, "file_creation_time": 1770032094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 38858 microseconds, and 6034 cpu microseconds.
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.461211) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4131504 bytes OK
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.461239) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.462835) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.462876) EVENT_LOG_v1 {"time_micros": 1770032094462866, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.462901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4220880, prev total WAL file size 4220880, number of live WAL files 2.
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.463712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4034KB)], [44(12MB)]
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094463801, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17065955, "oldest_snapshot_seqno": -1}
Feb 02 11:34:54 compute-0 sudo[257986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:34:54 compute-0 sudo[257986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5458 keys, 14890929 bytes, temperature: kUnknown
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094557979, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14890929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14852227, "index_size": 23936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137732, "raw_average_key_size": 25, "raw_value_size": 14751228, "raw_average_value_size": 2702, "num_data_blocks": 987, "num_entries": 5458, "num_filter_entries": 5458, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.558277) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14890929 bytes
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.574059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 158.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 5982, records dropped: 524 output_compression: NoCompression
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.574108) EVENT_LOG_v1 {"time_micros": 1770032094574088, "job": 22, "event": "compaction_finished", "compaction_time_micros": 94272, "compaction_time_cpu_micros": 25118, "output_level": 6, "num_output_files": 1, "total_output_size": 14890929, "num_input_records": 5982, "num_output_records": 5458, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094574825, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032094576148, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.463621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.576218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.576225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.576227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.576229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:34:54.576231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:34:54 compute-0 podman[258055]: 2026-02-02 11:34:54.822997783 +0000 UTC m=+0.035049567 container create 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.850 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.851 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79942433-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.851 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79942433-cf, col_values=(('external_ids', {'iface-id': '79942433-cf13-432a-ae35-76cf688e4dec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:87:ce', 'vm-uuid': '773f45b2-ee63-471e-8884-36748ebdf289'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:34:54 compute-0 podman[258055]: 2026-02-02 11:34:54.808167638 +0000 UTC m=+0.020219392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.912 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:54 compute-0 NetworkManager[49067]: <info>  [1770032094.9130] manager: (tap79942433-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.919 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.920 251294 INFO os_vif [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf')
Feb 02 11:34:54 compute-0 systemd[1]: Started libpod-conmon-920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66.scope.
Feb 02 11:34:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.979 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.980 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.981 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:5a:87:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:34:54 compute-0 nova_compute[251290]: 2026-02-02 11:34:54.981 251294 INFO nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Using config drive
Feb 02 11:34:54 compute-0 podman[258055]: 2026-02-02 11:34:54.986138975 +0000 UTC m=+0.198190759 container init 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:34:54 compute-0 podman[258055]: 2026-02-02 11:34:54.992335793 +0000 UTC m=+0.204387547 container start 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:34:54 compute-0 podman[258055]: 2026-02-02 11:34:54.996214554 +0000 UTC m=+0.208266318 container attach 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:34:55 compute-0 nice_ishizaka[258075]: 167 167
Feb 02 11:34:55 compute-0 systemd[1]: libpod-920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66.scope: Deactivated successfully.
Feb 02 11:34:55 compute-0 podman[258055]: 2026-02-02 11:34:55.000753774 +0000 UTC m=+0.212805528 container died 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.018 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9adbc3b868899c4cfa25c3d8284147011ea7df92b1aaf609dd3cafdef1a9d9d-merged.mount: Deactivated successfully.
Feb 02 11:34:55 compute-0 podman[258055]: 2026-02-02 11:34:55.077711093 +0000 UTC m=+0.289762847 container remove 920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:34:55 compute-0 systemd[1]: libpod-conmon-920a8b888334ab434cce9dc7830a06ed0cc43141b5669b04c56b89314149de66.scope: Deactivated successfully.
Feb 02 11:34:55 compute-0 podman[258117]: 2026-02-02 11:34:55.218062729 +0000 UTC m=+0.043346325 container create f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:34:55 compute-0 systemd[1]: Started libpod-conmon-f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6.scope.
Feb 02 11:34:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde82edcf429cabb7bb390a773491cb50f0679d12bdea93406d93528a35218d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde82edcf429cabb7bb390a773491cb50f0679d12bdea93406d93528a35218d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde82edcf429cabb7bb390a773491cb50f0679d12bdea93406d93528a35218d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde82edcf429cabb7bb390a773491cb50f0679d12bdea93406d93528a35218d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:34:55 compute-0 podman[258117]: 2026-02-02 11:34:55.200499745 +0000 UTC m=+0.025783371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:34:55 compute-0 podman[258117]: 2026-02-02 11:34:55.312352925 +0000 UTC m=+0.137636551 container init f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:34:55 compute-0 podman[258117]: 2026-02-02 11:34:55.318761909 +0000 UTC m=+0.144045515 container start f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:34:55 compute-0 podman[258117]: 2026-02-02 11:34:55.330691221 +0000 UTC m=+0.155974827 container attach f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:34:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:55 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.387 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:55 compute-0 ceph-mon[74676]: pgmap v693: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.498 251294 INFO nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Creating config drive at /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.502 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy8m_msys execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.641 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy8m_msys" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.679 251294 DEBUG nova.storage.rbd_utils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 773f45b2-ee63-471e-8884-36748ebdf289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.686 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config 773f45b2-ee63-471e-8884-36748ebdf289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.886 251294 DEBUG oslo_concurrency.processutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config 773f45b2-ee63-471e-8884-36748ebdf289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:34:55 compute-0 nova_compute[251290]: 2026-02-02 11:34:55.887 251294 INFO nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Deleting local config drive /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289/disk.config because it was imported into RBD.
Feb 02 11:34:55 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 11:34:55 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 11:34:56 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 02 11:34:56 compute-0 kernel: tap79942433-cf: entered promiscuous mode
Feb 02 11:34:56 compute-0 NetworkManager[49067]: <info>  [1770032096.0068] manager: (tap79942433-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Feb 02 11:34:56 compute-0 lvm[258276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:34:56 compute-0 lvm[258276]: VG ceph_vg0 finished
Feb 02 11:34:56 compute-0 systemd-udevd[258280]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:34:56 compute-0 ovn_controller[154901]: 2026-02-02T11:34:56Z|00027|binding|INFO|Claiming lport 79942433-cf13-432a-ae35-76cf688e4dec for this chassis.
Feb 02 11:34:56 compute-0 ovn_controller[154901]: 2026-02-02T11:34:56Z|00028|binding|INFO|79942433-cf13-432a-ae35-76cf688e4dec: Claiming fa:16:3e:5a:87:ce 10.100.0.5
Feb 02 11:34:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:56 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.045 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.070 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:87:ce 10.100.0.5'], port_security=['fa:16:3e:5a:87:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '773f45b2-ee63-471e-8884-36748ebdf289', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01599e6b-62e6-4447-b9b4-d21cf787b38f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06475576-b5d6-42de-962d-af0a5fd97532, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=79942433-cf13-432a-ae35-76cf688e4dec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.071 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 79942433-cf13-432a-ae35-76cf688e4dec in datapath 35ed898d-7ce9-4d75-8e38-edd8fff50f91 bound to our chassis
Feb 02 11:34:56 compute-0 NetworkManager[49067]: <info>  [1770032096.0790] device (tap79942433-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:34:56 compute-0 agitated_allen[258133]: {}
Feb 02 11:34:56 compute-0 NetworkManager[49067]: <info>  [1770032096.0799] device (tap79942433-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.083 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35ed898d-7ce9-4d75-8e38-edd8fff50f91
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.085 165304 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpls3qq0l9/privsep.sock']
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.099 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:56 compute-0 ovn_controller[154901]: 2026-02-02T11:34:56Z|00029|binding|INFO|Setting lport 79942433-cf13-432a-ae35-76cf688e4dec ovn-installed in OVS
Feb 02 11:34:56 compute-0 ovn_controller[154901]: 2026-02-02T11:34:56Z|00030|binding|INFO|Setting lport 79942433-cf13-432a-ae35-76cf688e4dec up in Southbound
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.105 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:56 compute-0 systemd-machined[218018]: New machine qemu-1-instance-00000001.
Feb 02 11:34:56 compute-0 podman[258117]: 2026-02-02 11:34:56.115684997 +0000 UTC m=+0.940968603 container died f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:34:56 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 02 11:34:56 compute-0 systemd[1]: libpod-f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6.scope: Deactivated successfully.
Feb 02 11:34:56 compute-0 systemd[1]: libpod-f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6.scope: Consumed 1.126s CPU time.
Feb 02 11:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde82edcf429cabb7bb390a773491cb50f0679d12bdea93406d93528a35218d5-merged.mount: Deactivated successfully.
Feb 02 11:34:56 compute-0 podman[258117]: 2026-02-02 11:34:56.170501071 +0000 UTC m=+0.995784677 container remove f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_allen, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:34:56 compute-0 systemd[1]: libpod-conmon-f8e872929a34ccf895fd2050d58fed555086a7bb5e73349998e7dd035fbdc6b6.scope: Deactivated successfully.
Feb 02 11:34:56 compute-0 sudo[257986]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:34:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:34:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Feb 02 11:34:56 compute-0 sudo[258312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:34:56 compute-0 sudo[258312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:34:56 compute-0 sudo[258312]: pam_unix(sudo:session): session closed for user root
Feb 02 11:34:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:56 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.589 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032096.5878384, 773f45b2-ee63-471e-8884-36748ebdf289 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.589 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] VM Started (Lifecycle Event)
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.638 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.645 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032096.5885482, 773f45b2-ee63-471e-8884-36748ebdf289 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.645 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] VM Paused (Lifecycle Event)
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.671 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.676 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:34:56 compute-0 nova_compute[251290]: 2026-02-02 11:34:56.700 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.897 165304 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.898 165304 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpls3qq0l9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.666 258380 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.671 258380 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.672 258380 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.673 258380 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258380
Feb 02 11:34:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:56.900 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a5016861-d9d6-4563-96fd-29936087f2fe]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:56.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:34:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:56.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:34:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:56] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb 02 11:34:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:34:56] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.084 251294 DEBUG nova.compute.manager [req-e05cd71e-d9e7-438a-88d0-47e445053938 req-fd7723fe-9418-4a07-a788-fa462cca6ef9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.084 251294 DEBUG oslo_concurrency.lockutils [req-e05cd71e-d9e7-438a-88d0-47e445053938 req-fd7723fe-9418-4a07-a788-fa462cca6ef9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.085 251294 DEBUG oslo_concurrency.lockutils [req-e05cd71e-d9e7-438a-88d0-47e445053938 req-fd7723fe-9418-4a07-a788-fa462cca6ef9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.086 251294 DEBUG oslo_concurrency.lockutils [req-e05cd71e-d9e7-438a-88d0-47e445053938 req-fd7723fe-9418-4a07-a788-fa462cca6ef9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.086 251294 DEBUG nova.compute.manager [req-e05cd71e-d9e7-438a-88d0-47e445053938 req-fd7723fe-9418-4a07-a788-fa462cca6ef9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Processing event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.087 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.090 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032097.0902545, 773f45b2-ee63-471e-8884-36748ebdf289 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.090 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] VM Resumed (Lifecycle Event)
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.093 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.097 251294 INFO nova.virt.libvirt.driver [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Instance spawned successfully.
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.099 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 11:34:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:34:57.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.119 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.125 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.129 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.129 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.129 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.130 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.130 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.131 251294 DEBUG nova.virt.libvirt.driver [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.158 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.202 251294 INFO nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Took 9.32 seconds to spawn the instance on the hypervisor.
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.203 251294 DEBUG nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:34:57 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:57 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:34:57 compute-0 ceph-mon[74676]: pgmap v694: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.268 251294 INFO nova.compute.manager [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Took 10.25 seconds to build instance.
Feb 02 11:34:57 compute-0 nova_compute[251290]: 2026-02-02 11:34:57.288 251294 DEBUG oslo_concurrency.lockutils [None req-21bc6955-a8c3-4185-9e53-97bb286de9b5 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:57 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:57 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:57.650 258380 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:57 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:57.650 258380 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:57 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:57.650 258380 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:58 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Feb 02 11:34:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:58 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.565 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a91dd5-aa79-4d24-b72e-af6376a16ec5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.566 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35ed898d-71 in ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.567 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35ed898d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.567 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[915d22c3-ba54-48ff-b67b-5c5cd93c2714]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.570 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9c135b03-00a5-438e-b5a1-d9370dbf19d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.586 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[216aed82-10a3-4645-b1cb-b41d45958bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.602 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[86605c9d-3e95-4c0f-b9aa-a8c2aab63990]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:58 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:58.604 165304 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmppcpgg4mo/privsep.sock']
Feb 02 11:34:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:34:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:34:58.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:34:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:34:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:34:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:34:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.202 251294 DEBUG nova.compute.manager [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.202 251294 DEBUG oslo_concurrency.lockutils [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.203 251294 DEBUG oslo_concurrency.lockutils [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.203 251294 DEBUG oslo_concurrency.lockutils [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.204 251294 DEBUG nova.compute.manager [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] No waiting events found dispatching network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.204 251294 WARNING nova.compute.manager [req-d426246f-6f05-4867-82b9-a49b60c1f7f1 req-8d9695cb-8b50-45d0-897b-1f234b82ad88 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received unexpected event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec for instance with vm_state active and task_state None.
Feb 02 11:34:59 compute-0 ceph-mon[74676]: pgmap v695: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.345 165304 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.347 165304 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppcpgg4mo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.157 258398 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.161 258398 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.163 258398 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.164 258398 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258398
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.351 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c0d251-1ddb-4ce8-b9ae-fbd9c426212d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:34:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:34:59 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:34:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:34:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:34:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:34:59 compute-0 nova_compute[251290]: 2026-02-02 11:34:59.912 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.975 258398 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.976 258398 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:34:59 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:34:59.977 258398 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:35:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Feb 02 11:35:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.389 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.672 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab52ef8-2b7b-4375-bbdb-c6d7f6786f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 NetworkManager[49067]: <info>  [1770032100.6931] manager: (tap35ed898d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.691 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[924fd962-c6cf-4062-9927-fa0c9fabf6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 systemd-udevd[258412]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.726 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[79613cde-35ec-4a8d-b3ca-255334068784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.733 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[766d7dd2-e5f4-472a-b710-0ee1c28695fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 NetworkManager[49067]: <info>  [1770032100.7601] device (tap35ed898d-70): carrier: link connected
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.766 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1b907e-98a6-49b1-9726-c21834753549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.784 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a2e6b2-9516-413a-82e0-4b7ea66d279c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35ed898d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:50:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375502, 'reachable_time': 28799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258430, 'error': None, 'target': 'ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.798 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[60e7fb3a-8d09-4cc7-9198-cf21d38404cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:509c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 375502, 'tstamp': 375502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258431, 'error': None, 'target': 'ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.812 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[8a74e8d6-f7ad-4282-9301-12ce2d654b65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35ed898d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:50:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375502, 'reachable_time': 28799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258432, 'error': None, 'target': 'ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.835 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e7389254-1ce6-4b37-b506-fa2b1fb937c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.893 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6e8d95-eae1-4c99-b8ea-0e7e087b5cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.895 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35ed898d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.896 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.896 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35ed898d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.898 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 NetworkManager[49067]: <info>  [1770032100.9005] manager: (tap35ed898d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Feb 02 11:35:00 compute-0 kernel: tap35ed898d-70: entered promiscuous mode
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.903 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.905 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35ed898d-70, col_values=(('external_ids', {'iface-id': 'df86a146-b82b-473c-bddd-e17d5f261529'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.906 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 ovn_controller[154901]: 2026-02-02T11:35:00Z|00031|binding|INFO|Releasing lport df86a146-b82b-473c-bddd-e17d5f261529 from this chassis (sb_readonly=0)
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.908 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.910 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35ed898d-7ce9-4d75-8e38-edd8fff50f91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35ed898d-7ce9-4d75-8e38-edd8fff50f91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.911 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5f00d8-65a8-40f9-a713-76a562c89623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.912 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-35ed898d-7ce9-4d75-8e38-edd8fff50f91
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/35ed898d-7ce9-4d75-8e38-edd8fff50f91.pid.haproxy
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID 35ed898d-7ce9-4d75-8e38-edd8fff50f91
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:35:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:00.913 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'env', 'PROCESS_TAG=haproxy-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35ed898d-7ce9-4d75-8e38-edd8fff50f91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:35:00 compute-0 nova_compute[251290]: 2026-02-02 11:35:00.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:00.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:01 compute-0 podman[258465]: 2026-02-02 11:35:01.30915239 +0000 UTC m=+0.062140594 container create f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 11:35:01 compute-0 systemd[1]: Started libpod-conmon-f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb.scope.
Feb 02 11:35:01 compute-0 ceph-mon[74676]: pgmap v696: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Feb 02 11:35:01 compute-0 podman[258465]: 2026-02-02 11:35:01.274554277 +0000 UTC m=+0.027542491 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:35:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a511a25c4d7f15a54c84fa3428ece3f13e542f18283407bd64a1a8a9b5022ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:35:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:01 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:01 compute-0 podman[258465]: 2026-02-02 11:35:01.399173954 +0000 UTC m=+0.152162178 container init f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 02 11:35:01 compute-0 podman[258465]: 2026-02-02 11:35:01.404377523 +0000 UTC m=+0.157365727 container start f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 02 11:35:01 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [NOTICE]   (258484) : New worker (258486) forked
Feb 02 11:35:01 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [NOTICE]   (258484) : Loading success.
Feb 02 11:35:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 02 11:35:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 02 11:35:01 compute-0 ceph-mon[74676]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6345] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6350] device (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <warn>  [1770032101.6352] device (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:35:01 compute-0 nova_compute[251290]: 2026-02-02 11:35:01.633 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:01 compute-0 ovn_controller[154901]: 2026-02-02T11:35:01Z|00032|binding|INFO|Releasing lport df86a146-b82b-473c-bddd-e17d5f261529 from this chassis (sb_readonly=0)
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6399] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6404] device (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <warn>  [1770032101.6405] device (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6413] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6419] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6424] device (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 11:35:01 compute-0 NetworkManager[49067]: <info>  [1770032101.6441] device (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 11:35:01 compute-0 ovn_controller[154901]: 2026-02-02T11:35:01Z|00033|binding|INFO|Releasing lport df86a146-b82b-473c-bddd-e17d5f261529 from this chassis (sb_readonly=0)
Feb 02 11:35:01 compute-0 nova_compute[251290]: 2026-02-02 11:35:01.645 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:01 compute-0 nova_compute[251290]: 2026-02-02 11:35:01.649 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:02 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Feb 02 11:35:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:02 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:02 compute-0 ceph-mon[74676]: osdmap e148: 3 total, 3 up, 3 in
Feb 02 11:35:02 compute-0 ceph-mon[74676]: pgmap v698: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Feb 02 11:35:02 compute-0 nova_compute[251290]: 2026-02-02 11:35:02.709 251294 DEBUG nova.compute.manager [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-changed-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:35:02 compute-0 nova_compute[251290]: 2026-02-02 11:35:02.710 251294 DEBUG nova.compute.manager [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing instance network info cache due to event network-changed-79942433-cf13-432a-ae35-76cf688e4dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:35:02 compute-0 nova_compute[251290]: 2026-02-02 11:35:02.710 251294 DEBUG oslo_concurrency.lockutils [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:35:02 compute-0 nova_compute[251290]: 2026-02-02 11:35:02.710 251294 DEBUG oslo_concurrency.lockutils [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:35:02 compute-0 nova_compute[251290]: 2026-02-02 11:35:02.711 251294 DEBUG nova.network.neutron [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:35:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:02.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:03 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:04 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Feb 02 11:35:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:04 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:04 compute-0 nova_compute[251290]: 2026-02-02 11:35:04.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:35:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:35:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:04.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:05 compute-0 sudo[258499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:35:05 compute-0 sudo[258499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:05 compute-0 sudo[258499]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:05 compute-0 ceph-mon[74676]: pgmap v699: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Feb 02 11:35:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:05 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:05 compute-0 nova_compute[251290]: 2026-02-02 11:35:05.391 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:05 compute-0 nova_compute[251290]: 2026-02-02 11:35:05.871 251294 DEBUG nova.network.neutron [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updated VIF entry in instance network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:35:05 compute-0 nova_compute[251290]: 2026-02-02 11:35:05.872 251294 DEBUG nova.network.neutron [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:35:05 compute-0 nova_compute[251290]: 2026-02-02 11:35:05.890 251294 DEBUG oslo_concurrency.lockutils [req-6c2d4c22-bc34-40fc-b05b-d8eca53fa51f req-d3e814b1-e744-4564-bda5-f9129b36e2c8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:35:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:06 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:06 compute-0 podman[258526]: 2026-02-02 11:35:06.267849675 +0000 UTC m=+0.055261107 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 11:35:06 compute-0 podman[258527]: 2026-02-02 11:35:06.291631898 +0000 UTC m=+0.076703273 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb 02 11:35:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:06 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:06.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:06.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:35:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:35:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:07.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:35:07 compute-0 ceph-mon[74676]: pgmap v700: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:07 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:08 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:08 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:08.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:09 compute-0 ovn_controller[154901]: 2026-02-02T11:35:09Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:87:ce 10.100.0.5
Feb 02 11:35:09 compute-0 ovn_controller[154901]: 2026-02-02T11:35:09Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:87:ce 10.100.0.5
Feb 02 11:35:09 compute-0 ceph-mon[74676]: pgmap v701: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:09 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:09 compute-0 nova_compute[251290]: 2026-02-02 11:35:09.917 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:10 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:10 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:10 compute-0 nova_compute[251290]: 2026-02-02 11:35:10.393 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:10.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:10.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:11 compute-0 ceph-mon[74676]: pgmap v702: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Feb 02 11:35:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:11 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:12 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 262 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 02 11:35:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:12 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:13 compute-0 ceph-mon[74676]: pgmap v703: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 262 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 02 11:35:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:13 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:14 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058003370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 02 11:35:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:14 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:35:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:14 compute-0 nova_compute[251290]: 2026-02-02 11:35:14.919 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:14.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.200 251294 INFO nova.compute.manager [None req-d5914882-49bf-4dca-aecd-e80852f6a9b8 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Get console output
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.206 251294 INFO oslo.privsep.daemon [None req-d5914882-49bf-4dca-aecd-e80852f6a9b8 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp__loahi8/privsep.sock']
Feb 02 11:35:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:15 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.395 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:15 compute-0 ceph-mon[74676]: pgmap v704: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 02 11:35:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.930 251294 INFO oslo.privsep.daemon [None req-d5914882-49bf-4dca-aecd-e80852f6a9b8 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Spawned new privsep daemon via rootwrap
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.779 258588 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.783 258588 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.785 258588 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 02 11:35:15 compute-0 nova_compute[251290]: 2026-02-02 11:35:15.785 258588 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258588
Feb 02 11:35:16 compute-0 nova_compute[251290]: 2026-02-02 11:35:16.029 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:35:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:16 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:16 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058003370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:16.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:16] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Feb 02 11:35:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:16] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Feb 02 11:35:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:17.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:35:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:17 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:17 compute-0 ceph-mon[74676]: pgmap v705: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:18 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:18 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:18 compute-0 ceph-mon[74676]: pgmap v706: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:18.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:19 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058003370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:19 compute-0 nova_compute[251290]: 2026-02-02 11:35:19.921 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:20 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:20 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:20 compute-0 nova_compute[251290]: 2026-02-02 11:35:20.397 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:21 compute-0 ceph-mon[74676]: pgmap v707: 353 pgs: 353 active+clean; 121 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:21 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:22.670 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:35:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:22.670 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:35:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:22.671 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:35:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:35:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:22.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:35:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:23.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:23 compute-0 ceph-mon[74676]: pgmap v708: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Feb 02 11:35:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 0 op/s
Feb 02 11:35:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:24 compute-0 nova_compute[251290]: 2026-02-02 11:35:24.923 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:25.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:25 compute-0 sudo[258599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:35:25 compute-0 sudo[258599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:25 compute-0 sudo[258599]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:25 compute-0 ceph-mon[74676]: pgmap v709: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 0 op/s
Feb 02 11:35:25 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/621901916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:25 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:25 compute-0 nova_compute[251290]: 2026-02-02 11:35:25.400 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:26 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:35:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:26 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:35:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/253388194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:26 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:26.497 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:35:26 compute-0 nova_compute[251290]: 2026-02-02 11:35:26.498 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:26 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:26.499 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:35:26 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:35:26.500 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:35:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:35:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:35:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:26] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Feb 02 11:35:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:26] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Feb 02 11:35:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:35:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:35:27 compute-0 nova_compute[251290]: 2026-02-02 11:35:27.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:27.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:35:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:27.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:35:27 compute-0 ceph-mon[74676]: pgmap v710: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:35:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/253388194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3283593039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:28 compute-0 nova_compute[251290]: 2026-02-02 11:35:28.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:28 compute-0 nova_compute[251290]: 2026-02-02 11:35:28.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:28 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Feb 02 11:35:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:28 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:35:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:35:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:35:29 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:35:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:29 compute-0 ceph-mon[74676]: pgmap v711: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Feb 02 11:35:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/514399948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:35:29
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes']
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:35:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:35:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:35:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:35:29 compute-0 nova_compute[251290]: 2026-02-02 11:35:29.926 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.045 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.045 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.046 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.046 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.046 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:35:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Feb 02 11:35:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.402 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1152306279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:35:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/930254995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:35:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336104993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.558 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.639 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.640 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:35:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:35:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2433636648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.810 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.812 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4440MB free_disk=59.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.812 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.812 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.893 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance 773f45b2-ee63-471e-8884-36748ebdf289 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.894 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.894 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:35:30 compute-0 nova_compute[251290]: 2026-02-02 11:35:30.944 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:35:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:31 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:31 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:35:31 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197944567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.432 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
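The disk numbers behind that view come from shelling out to `ceph df` (0.488s here, run roughly once per periodic update). A standalone sketch of the same call, assuming the stock `ceph df --format=json` output shape ({"stats": {...}, "pools": [...]}):

```python
# Sketch of the subprocess call logged above; assumes the standard
# `ceph df --format=json` JSON schema.
import json
import subprocess

raw = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(raw)["stats"]
print("avail GiB:", stats["total_avail_bytes"] / 1024**3,
      "of", stats["total_bytes"] / 1024**3)
```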
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.439 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
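For reference, placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked with the numbers logged for provider 92919e7b-7846-4645-9401-9fd55bbbf435:

```python
# Capacity formula applied to the inventory logged above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2
```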
Feb 02 11:35:31 compute-0 ceph-mon[74676]: pgmap v712: 353 pgs: 353 active+clean; 121 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Feb 02 11:35:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3336104993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2433636648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2467310578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2197944567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.483 251294 ERROR nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [req-b19052af-266b-428a-804a-9ff9e6502357] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 92919e7b-7846-4645-9401-9fd55bbbf435.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-b19052af-266b-428a-804a-9ff9e6502357"}]}
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.503 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.530 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.531 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.546 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:35:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.577 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:35:31 compute-0 nova_compute[251290]: 2026-02-02 11:35:31.620 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:35:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:32 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:35:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138433748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.130 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.136 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.188 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updated inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.189 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.189 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.225 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:35:32 compute-0 nova_compute[251290]: 2026-02-02 11:35:32.226 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
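The 409 at 11:35:31.483 followed by the clean update at 11:35:32.188 (provider generation 3 to 4) is placement's optimistic concurrency working as designed: every inventory write carries the provider generation, a stale generation is rejected with placement.concurrent_update, and the client refreshes and retries. A sketch of that pattern with hypothetical helpers, not nova's actual client API:

```python
# Sketch of generation-based optimistic concurrency; get_provider and
# put_inventory are hypothetical callables, not nova's API. Mirrors the
# 409 -> refresh -> retry sequence seen above.
def set_inventory(uuid, inventory, get_provider, put_inventory, attempts=3):
    for _ in range(attempts):
        gen = get_provider(uuid)["generation"]        # refresh current generation
        resp = put_inventory(uuid, generation=gen, inventories=inventory)
        if resp.status_code == 200:
            return resp.json()                        # success bumps the generation
        if resp.status_code != 409:
            resp.raise_for_status()                   # non-conflict errors propagate
        # 409 placement.concurrent_update: another writer won; loop and retry
    raise RuntimeError("inventory update kept conflicting; giving up")
```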
Feb 02 11:35:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:35:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:32 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3138433748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:33 compute-0 nova_compute[251290]: 2026-02-02 11:35:33.226 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:33 compute-0 nova_compute[251290]: 2026-02-02 11:35:33.227 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:33 compute-0 nova_compute[251290]: 2026-02-02 11:35:33.247 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:33 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:33 compute-0 ceph-mon[74676]: pgmap v713: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:35:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:34 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:35:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:34 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.689 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.690 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquired lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.690 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.691 251294 DEBUG nova.objects.instance [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 773f45b2-ee63-471e-8884-36748ebdf289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:35:34 compute-0 nova_compute[251290]: 2026-02-02 11:35:34.928 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:35 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:35 compute-0 nova_compute[251290]: 2026-02-02 11:35:35.405 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:35 compute-0 ceph-mon[74676]: pgmap v714: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:35:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:36 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 02 11:35:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:36 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113536 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:35:36 compute-0 ceph-mon[74676]: pgmap v715: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 02 11:35:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:36 compute-0 nova_compute[251290]: 2026-02-02 11:35:36.853 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:35:36 compute-0 nova_compute[251290]: 2026-02-02 11:35:36.872 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Releasing lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:35:36 compute-0 nova_compute[251290]: 2026-02-02 11:35:36.872 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
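The network_info blob written to the cache at 11:35:36.853 nests addresses three levels deep (VIF, then subnet, then fixed IP with its floating IPs). A short sketch that walks it, with the structure trimmed down from that log line:

```python
# Walk the network_info shape logged above (trimmed to the address fields).
network_info = [{
    "id": "79942433-cf13-432a-ae35-76cf688e4dec",
    "address": "fa:16:3e:5a:87:ce",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{
            "address": "10.100.0.5", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.218", "type": "floating"}],
        }],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["address"], ip["address"], "->", floats)
# fa:16:3e:5a:87:ce 10.100.0.5 -> ['192.168.122.218']
```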
Feb 02 11:35:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:36] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Feb 02 11:35:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:36] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Feb 02 11:35:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:37.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:37.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:35:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:37.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
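Alertmanager's dashboard webhook posts to compute-1/compute-2 on 8443 are timing out, and haproxy marked the NFS backends DOWN on Layer4 connect at 11:35:36; both are plain TCP reachability failures. A probe sketch assuming only the hostnames and port taken from the log, distinguishing a timeout (what alertmanager hit) from a refused connection (what haproxy reported):

```python
# TCP reachability probe for the webhook receivers named above; nothing
# here comes from alertmanager itself.
import socket

for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
    try:
        with socket.create_connection((host, 8443), timeout=5):
            print(host, "8443 reachable")
    except socket.timeout:
        print(host, "i/o timeout")           # matches the dispatcher warning
    except OSError as err:
        print(host, "connect failed:", err)  # e.g. connection refused
```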
Feb 02 11:35:37 compute-0 podman[258703]: 2026-02-02 11:35:37.273265289 +0000 UTC m=+0.062022290 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 11:35:37 compute-0 podman[258704]: 2026-02-02 11:35:37.318811855 +0000 UTC m=+0.105076784 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 11:35:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:37 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:38 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 02 11:35:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:38 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:39 compute-0 ceph-mon[74676]: pgmap v716: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 02 11:35:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:39 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a6a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:39 compute-0 nova_compute[251290]: 2026-02-02 11:35:39.897 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:39 compute-0 nova_compute[251290]: 2026-02-02 11:35:39.930 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:40 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 02 11:35:40 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:40 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:40 compute-0 nova_compute[251290]: 2026-02-02 11:35:40.406 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:41 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:41 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:41 compute-0 ceph-mon[74676]: pgmap v717: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 02 11:35:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:42 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a6c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 02 11:35:42 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:42 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:43 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:43 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:43 compute-0 ceph-mon[74676]: pgmap v718: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 02 11:35:44 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 02 11:35:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:44 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 02 11:35:44 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:44 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2504530478' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:35:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2504530478' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:35:44 compute-0 ceph-mon[74676]: pgmap v719: 353 pgs: 353 active+clean; 167 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 02 11:35:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:35:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:44 compute-0 nova_compute[251290]: 2026-02-02 11:35:44.933 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:35:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113545 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:35:45 compute-0 sudo[258755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:35:45 compute-0 sudo[258755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:45 compute-0 sudo[258755]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:45 compute-0 nova_compute[251290]: 2026-02-02 11:35:45.408 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:45 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:46 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Feb 02 11:35:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:46 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:46 compute-0 ceph-mon[74676]: pgmap v720: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Feb 02 11:35:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:46] "GET /metrics HTTP/1.1" 200 48391 "" "Prometheus/2.51.0"
Feb 02 11:35:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:46] "GET /metrics HTTP/1.1" 200 48391 "" "Prometheus/2.51.0"
Feb 02 11:35:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:47 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:47.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:35:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:47.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:35:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:47 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:35:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:35:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 02 11:35:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:48 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:49 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:49 compute-0 ceph-mon[74676]: pgmap v721: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 02 11:35:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:49 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:49 compute-0 nova_compute[251290]: 2026-02-02 11:35:49.935 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:50 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 02 11:35:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:50 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:50 compute-0 nova_compute[251290]: 2026-02-02 11:35:50.410 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:51 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:51.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:51 compute-0 ceph-mon[74676]: pgmap v722: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 02 11:35:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:35:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:51 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:35:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:52 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:35:52 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:52 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:35:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:35:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:53 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:53 compute-0 ceph-mon[74676]: pgmap v723: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:35:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:53 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:35:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:54 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:54 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
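The reaper lines trace one full ganesha grace cycle: IN GRACE for up to 90s at 11:35:45, client info reloaded from the backend at 11:35:48, repeated lift checks reporting reclaim complete(0)/clid count(0), then the early lift at 11:35:54. A toy model of that lift decision; field names are descriptive, not ganesha's:

```python
# Toy model of the grace-lift checks above (descriptive names only).
from dataclasses import dataclass

@dataclass
class Grace:
    reloaded: bool = False   # "grace reload client info completed from backend"
    clid_count: int = 0      # clients still holding reclaimable state

    def try_lift(self) -> bool:
        # The reaper re-checks periodically; once the backend reload is done
        # and no clients remain to reclaim, grace lifts before 90s expire.
        return self.reloaded and self.clid_count == 0

g = Grace()
assert not g.try_lift()      # still IN GRACE right after nfs_start_grace
g.reloaded = True
assert g.try_lift()          # -> "NFS Server Now NOT IN GRACE"
```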
Feb 02 11:35:54 compute-0 nova_compute[251290]: 2026-02-02 11:35:54.937 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:55 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:55 compute-0 ceph-mon[74676]: pgmap v724: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:35:55 compute-0 nova_compute[251290]: 2026-02-02 11:35:55.412 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:35:55 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:55 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:56 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 11:35:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:56 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:56 compute-0 sudo[258793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:35:56 compute-0 sudo[258793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:56 compute-0 sudo[258793]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:56 compute-0 sudo[258818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:35:56 compute-0 sudo[258818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:35:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:56] "GET /metrics HTTP/1.1" 200 48391 "" "Prometheus/2.51.0"
Feb 02 11:35:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:35:56] "GET /metrics HTTP/1.1" 200 48391 "" "Prometheus/2.51.0"
Feb 02 11:35:57 compute-0 sudo[258818]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:57.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:35:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:35:57.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:35:57 compute-0 ceph-mon[74676]: pgmap v725: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 11:35:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:57 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:57 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:35:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:58 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 107 KiB/s wr, 17 op/s
Feb 02 11:35:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:58 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1357375325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:35:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:35:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:35:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:35:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:35:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:35:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:35:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:35:59.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:35:59 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:35:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:35:59 compute-0 sudo[258877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:35:59 compute-0 sudo[258877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:59 compute-0 sudo[258877]: pam_unix(sudo:session): session closed for user root
Feb 02 11:35:59 compute-0 sudo[258902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:35:59 compute-0 sudo[258902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:35:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:35:59 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: pgmap v726: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 107 KiB/s wr, 17 op/s
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:35:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:35:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.830216078 +0000 UTC m=+0.046176425 container create 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:35:59 compute-0 systemd[1]: Started libpod-conmon-0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9.scope.
Feb 02 11:35:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.807416754 +0000 UTC m=+0.023377121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.918610802 +0000 UTC m=+0.134571169 container init 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.926076117 +0000 UTC m=+0.142036464 container start 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.929720211 +0000 UTC m=+0.145680568 container attach 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:35:59 compute-0 reverent_darwin[258985]: 167 167
Feb 02 11:35:59 compute-0 systemd[1]: libpod-0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9.scope: Deactivated successfully.
Feb 02 11:35:59 compute-0 conmon[258985]: conmon 0443478c8e18567d5429 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9.scope/container/memory.events
Feb 02 11:35:59 compute-0 podman[258969]: 2026-02-02 11:35:59.934887009 +0000 UTC m=+0.150847356 container died 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:36:00 compute-0 nova_compute[251290]: 2026-02-02 11:36:00.062 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1cd240d406643eef57884d055da9826f43afa88fae5f20aa3de34dedd0b7ae-merged.mount: Deactivated successfully.
Feb 02 11:36:00 compute-0 podman[258969]: 2026-02-02 11:36:00.096945536 +0000 UTC m=+0.312905883 container remove 0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:36:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:00 compute-0 systemd[1]: libpod-conmon-0443478c8e18567d54296a11ab3f41f8f0edbbf820f8d7aebcf751d4601e93b9.scope: Deactivated successfully.
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.254839383 +0000 UTC m=+0.052820005 container create 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:36:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 107 KiB/s wr, 17 op/s
Feb 02 11:36:00 compute-0 systemd[1]: Started libpod-conmon-12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324.scope.
Feb 02 11:36:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.226891312 +0000 UTC m=+0.024871954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.332663935 +0000 UTC m=+0.130644577 container init 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.341867748 +0000 UTC m=+0.139848370 container start 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:36:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113600 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.377891441 +0000 UTC m=+0.175872083 container attach 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:36:00 compute-0 nova_compute[251290]: 2026-02-02 11:36:00.414 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:36:00 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:00 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:36:00 compute-0 magical_booth[259025]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:36:00 compute-0 magical_booth[259025]: --> All data devices are unavailable
Feb 02 11:36:00 compute-0 systemd[1]: libpod-12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324.scope: Deactivated successfully.
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.690435763 +0000 UTC m=+0.488416385 container died 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a36a5a0ac7b0959287d7f9793d04896f697ea48f566c0db41f6ca79b3c4bc2-merged.mount: Deactivated successfully.
Feb 02 11:36:00 compute-0 podman[259009]: 2026-02-02 11:36:00.744011969 +0000 UTC m=+0.541992591 container remove 12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:36:00 compute-0 systemd[1]: libpod-conmon-12b1c27ad4f8543a1b362944cabf00045484a2da655ef0c2b773064e2f704324.scope: Deactivated successfully.
Feb 02 11:36:00 compute-0 sudo[258902]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:00 compute-0 sudo[259053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:36:00 compute-0 sudo[259053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:00 compute-0 sudo[259053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:00 compute-0 sudo[259078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:36:00 compute-0 sudo[259078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:36:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:01 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:01 compute-0 ovn_controller[154901]: 2026-02-02T11:36:01Z|00034|binding|INFO|Releasing lport df86a146-b82b-473c-bddd-e17d5f261529 from this chassis (sb_readonly=0)
Feb 02 11:36:01 compute-0 nova_compute[251290]: 2026-02-02 11:36:01.182 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.282428597 +0000 UTC m=+0.040895514 container create 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:01 compute-0 systemd[1]: Started libpod-conmon-25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384.scope.
Feb 02 11:36:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.265040168 +0000 UTC m=+0.023507105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.369254266 +0000 UTC m=+0.127721213 container init 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.376160834 +0000 UTC m=+0.134627751 container start 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.380946002 +0000 UTC m=+0.139412939 container attach 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:36:01 compute-0 stupefied_davinci[259158]: 167 167
Feb 02 11:36:01 compute-0 systemd[1]: libpod-25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384.scope: Deactivated successfully.
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.382314471 +0000 UTC m=+0.140781418 container died 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf29c1ac904cd7e313193026fc34d6fc86b2bcf50597301cba75de6ed79b68c-merged.mount: Deactivated successfully.
Feb 02 11:36:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:01 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:01 compute-0 podman[259143]: 2026-02-02 11:36:01.419287981 +0000 UTC m=+0.177754898 container remove 25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:36:01 compute-0 systemd[1]: libpod-conmon-25980a6bffa8df65c4d177c580a12ebe23cab573cc185094611dcaa2fd4bd384.scope: Deactivated successfully.
Feb 02 11:36:01 compute-0 ceph-mon[74676]: pgmap v727: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 107 KiB/s wr, 17 op/s
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.571591608 +0000 UTC m=+0.053403742 container create 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:36:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:01 compute-0 systemd[1]: Started libpod-conmon-86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0.scope.
Feb 02 11:36:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.543210124 +0000 UTC m=+0.025022288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6595b89c1fded4f8606a7375335e35adc35d40f2a5fc219dc610f1743b59b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6595b89c1fded4f8606a7375335e35adc35d40f2a5fc219dc610f1743b59b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6595b89c1fded4f8606a7375335e35adc35d40f2a5fc219dc610f1743b59b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6595b89c1fded4f8606a7375335e35adc35d40f2a5fc219dc610f1743b59b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.659068806 +0000 UTC m=+0.140880960 container init 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.666574251 +0000 UTC m=+0.148386385 container start 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.671098071 +0000 UTC m=+0.152910225 container attach 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:36:01 compute-0 lucid_mayer[259201]: {
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:     "1": [
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:         {
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "devices": [
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "/dev/loop3"
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             ],
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "lv_name": "ceph_lv0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "lv_size": "21470642176",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "name": "ceph_lv0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "tags": {
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.cluster_name": "ceph",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.crush_device_class": "",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.encrypted": "0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.osd_id": "1",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.type": "block",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.vdo": "0",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:                 "ceph.with_tpm": "0"
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             },
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "type": "block",
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:             "vg_name": "ceph_vg0"
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:         }
Feb 02 11:36:01 compute-0 lucid_mayer[259201]:     ]
Feb 02 11:36:01 compute-0 lucid_mayer[259201]: }
Feb 02 11:36:01 compute-0 systemd[1]: libpod-86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0.scope: Deactivated successfully.
Feb 02 11:36:01 compute-0 conmon[259201]: conmon 86d3ca2d25d1aa3e6d9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0.scope/container/memory.events
Feb 02 11:36:01 compute-0 podman[259184]: 2026-02-02 11:36:01.985017362 +0000 UTC m=+0.466829516 container died 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6595b89c1fded4f8606a7375335e35adc35d40f2a5fc219dc610f1743b59b5-merged.mount: Deactivated successfully.
Feb 02 11:36:02 compute-0 podman[259184]: 2026-02-02 11:36:02.0254037 +0000 UTC m=+0.507215834 container remove 86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:36:02 compute-0 systemd[1]: libpod-conmon-86d3ca2d25d1aa3e6d9f2fe536a07cae7ebfe769ca6ecb4c73aee2e3f8c4c8c0.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 sudo[259078]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:02 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:02 compute-0 sudo[259223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:36:02 compute-0 sudo[259223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:02 compute-0 sudo[259223]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:02 compute-0 sudo[259248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:36:02 compute-0 sudo[259248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 111 KiB/s wr, 48 op/s
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.319 251294 DEBUG nova.compute.manager [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-changed-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.319 251294 DEBUG nova.compute.manager [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing instance network info cache due to event network-changed-79942433-cf13-432a-ae35-76cf688e4dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.319 251294 DEBUG oslo_concurrency.lockutils [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.319 251294 DEBUG oslo_concurrency.lockutils [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.320 251294 DEBUG nova.network.neutron [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Refreshing network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:36:02 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:02 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.409 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.410 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.411 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.411 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.412 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.414 251294 INFO nova.compute.manager [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Terminating instance
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.415 251294 DEBUG nova.compute.manager [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 11:36:02 compute-0 kernel: tap79942433-cf (unregistering): left promiscuous mode
Feb 02 11:36:02 compute-0 NetworkManager[49067]: <info>  [1770032162.4758] device (tap79942433-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:36:02 compute-0 ovn_controller[154901]: 2026-02-02T11:36:02Z|00035|binding|INFO|Releasing lport 79942433-cf13-432a-ae35-76cf688e4dec from this chassis (sb_readonly=0)
Feb 02 11:36:02 compute-0 ovn_controller[154901]: 2026-02-02T11:36:02Z|00036|binding|INFO|Setting lport 79942433-cf13-432a-ae35-76cf688e4dec down in Southbound
Feb 02 11:36:02 compute-0 ovn_controller[154901]: 2026-02-02T11:36:02Z|00037|binding|INFO|Removing iface tap79942433-cf ovn-installed in OVS
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.484 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.485 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.488 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.499 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:87:ce 10.100.0.5'], port_security=['fa:16:3e:5a:87:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '773f45b2-ee63-471e-8884-36748ebdf289', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01599e6b-62e6-4447-b9b4-d21cf787b38f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06475576-b5d6-42de-962d-af0a5fd97532, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=79942433-cf13-432a-ae35-76cf688e4dec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.499 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.501 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 79942433-cf13-432a-ae35-76cf688e4dec in datapath 35ed898d-7ce9-4d75-8e38-edd8fff50f91 unbound from our chassis
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.503 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35ed898d-7ce9-4d75-8e38-edd8fff50f91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.505 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[12eb70cb-cbf2-47da-8696-68d16b7b4807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.505 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91 namespace which is not needed anymore
Feb 02 11:36:02 compute-0 ceph-mon[74676]: pgmap v728: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 111 KiB/s wr, 48 op/s
Feb 02 11:36:02 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 14.920s CPU time.
Feb 02 11:36:02 compute-0 systemd-machined[218018]: Machine qemu-1-instance-00000001 terminated.
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [NOTICE]   (258484) : haproxy version is 2.8.14-c23fe91
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [NOTICE]   (258484) : path to executable is /usr/sbin/haproxy
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [WARNING]  (258484) : Exiting Master process...
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [WARNING]  (258484) : Exiting Master process...
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [ALERT]    (258484) : Current worker (258486) exited with code 143 (Terminated)
Feb 02 11:36:02 compute-0 neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91[258480]: [WARNING]  (258484) : All workers exited. Exiting... (0)
Feb 02 11:36:02 compute-0 systemd[1]: libpod-f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 NetworkManager[49067]: <info>  [1770032162.6417] manager: (tap79942433-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Feb 02 11:36:02 compute-0 podman[259337]: 2026-02-02 11:36:02.642205125 +0000 UTC m=+0.044892718 container died f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:36:02 compute-0 systemd-udevd[259299]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.650197704 +0000 UTC m=+0.053350120 container create 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.672 251294 INFO nova.virt.libvirt.driver [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Instance destroyed successfully.
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.674 251294 DEBUG nova.objects.instance [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'resources' on Instance uuid 773f45b2-ee63-471e-8884-36748ebdf289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:02 compute-0 systemd[1]: Started libpod-conmon-004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa.scope.
Feb 02 11:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a511a25c4d7f15a54c84fa3428ece3f13e542f18283407bd64a1a8a9b5022ad-merged.mount: Deactivated successfully.
Feb 02 11:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb-userdata-shm.mount: Deactivated successfully.
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.697 251294 DEBUG nova.virt.libvirt.vif [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:34:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1706655855',display_name='tempest-TestNetworkBasicOps-server-1706655855',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1706655855',id=1,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJILa1pJ0lMUnAszlbq6jqwpvMTgRVY/oBl6jvDH3JNgZy2n8zWBmgqZ4xT99avbs9P9LPHO41NiPnKt5YdAc9UIfIeWurJQe6O/sIxYJXCVhMCy77X9aVAhJ9Pdy3RPA==',key_name='tempest-TestNetworkBasicOps-866424295',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:34:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-j6d8awkp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:34:57Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=773f45b2-ee63-471e-8884-36748ebdf289,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.698 251294 DEBUG nova.network.os_vif_util [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.699 251294 DEBUG nova.network.os_vif_util [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.699 251294 DEBUG os_vif [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.702 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.702 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79942433-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.706 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.710 251294 INFO os_vif [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:87:ce,bridge_name='br-int',has_traffic_filtering=True,id=79942433-cf13-432a-ae35-76cf688e4dec,network=Network(35ed898d-7ce9-4d75-8e38-edd8fff50f91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79942433-cf')
Feb 02 11:36:02 compute-0 podman[259337]: 2026-02-02 11:36:02.717526865 +0000 UTC m=+0.120214438 container cleanup f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.627141153 +0000 UTC m=+0.030293589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:02 compute-0 systemd[1]: libpod-conmon-f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.750410708 +0000 UTC m=+0.153563144 container init 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.760391064 +0000 UTC m=+0.163543480 container start 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:36:02 compute-0 gracious_hamilton[259381]: 167 167
Feb 02 11:36:02 compute-0 systemd[1]: libpod-004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.769838105 +0000 UTC m=+0.172990531 container attach 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.771117071 +0000 UTC m=+0.174269487 container died 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e57465798e3c65e8a458e5d18be0e60c77cc6b37ac5f2a758109c5f5b037b6c-merged.mount: Deactivated successfully.
Feb 02 11:36:02 compute-0 podman[259395]: 2026-02-02 11:36:02.81711865 +0000 UTC m=+0.074870647 container remove f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.823 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0b1790-4e4d-42ca-8894-4a56a7b4d89f]: (4, ('Mon Feb  2 11:36:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91 (f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb)\nf41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb\nMon Feb  2 11:36:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91 (f41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb)\nf41150690a2b64362863a41704730b877562a83db1ca427d2268259ebc281fdb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.826 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[487a9f73-1a72-4187-97b6-3c501f31372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.827 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35ed898d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.830 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 podman[259334]: 2026-02-02 11:36:02.835388674 +0000 UTC m=+0.238541090 container remove 004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:36:02 compute-0 kernel: tap35ed898d-70: left promiscuous mode
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.836 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.839 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:02 compute-0 systemd[1]: libpod-conmon-004fae915a856afcc74863b2e9d9796d2f5761b8c19c520224f232017fd69daa.scope: Deactivated successfully.
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.843 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[541f410c-ef79-498d-b60d-e22cd81572a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.863 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bc545a-49f2-4c5e-8a07-d8db7a770528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.865 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[07b62727-9f63-417a-9410-c06441b68278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.875 251294 DEBUG nova.compute.manager [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-unplugged-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.875 251294 DEBUG oslo_concurrency.lockutils [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.876 251294 DEBUG oslo_concurrency.lockutils [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.876 251294 DEBUG oslo_concurrency.lockutils [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.876 251294 DEBUG nova.compute.manager [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] No waiting events found dispatching network-vif-unplugged-79942433-cf13-432a-ae35-76cf688e4dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:36:02 compute-0 nova_compute[251290]: 2026-02-02 11:36:02.876 251294 DEBUG nova.compute.manager [req-79183d16-7666-42d3-945a-03e8dc742bc6 req-fc9f045a-dbb5-4ca0-91d1-dc596a7c7d34 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-unplugged-79942433-cf13-432a-ae35-76cf688e4dec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.884 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[63c19328-020c-47c5-903a-0e6eecb3911f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375492, 'reachable_time': 36405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259434, 'error': None, 'target': 'ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.895 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35ed898d-7ce9-4d75-8e38-edd8fff50f91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:36:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:02.896 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[75e59505-6292-4ec4-87de-c2f30c274a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d35ed898d\x2d7ce9\x2d4d75\x2d8e38\x2dedd8fff50f91.mount: Deactivated successfully.
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:02.999590742 +0000 UTC m=+0.050342894 container create 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:36:03 compute-0 systemd[1]: Started libpod-conmon-4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6.scope.
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:02.977685384 +0000 UTC m=+0.028437546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa18535aec9b19697a359d97e95e8763faff03d1d4bf73f67b63f5830bfcfb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa18535aec9b19697a359d97e95e8763faff03d1d4bf73f67b63f5830bfcfb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa18535aec9b19697a359d97e95e8763faff03d1d4bf73f67b63f5830bfcfb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa18535aec9b19697a359d97e95e8763faff03d1d4bf73f67b63f5830bfcfb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b1bf5d0 =====
Feb 02 11:36:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000057s ======
Feb 02 11:36:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b1bf5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:03.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Feb 02 11:36:03 compute-0 radosgw[89826]: beast: 0x7fe00b1bf5d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:03.118147411 +0000 UTC m=+0.168899583 container init 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:03.126944893 +0000 UTC m=+0.177697035 container start 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:03.133894792 +0000 UTC m=+0.184646934 container attach 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.290 251294 INFO nova.virt.libvirt.driver [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Deleting instance files /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289_del
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.292 251294 INFO nova.virt.libvirt.driver [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Deletion of /var/lib/nova/instances/773f45b2-ee63-471e-8884-36748ebdf289_del complete
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.363 251294 DEBUG nova.virt.libvirt.host [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.365 251294 INFO nova.virt.libvirt.host [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] UEFI support detected
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.368 251294 INFO nova.compute.manager [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Took 0.95 seconds to destroy the instance on the hypervisor.
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.370 251294 DEBUG oslo.service.loopingcall [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.370 251294 DEBUG nova.compute.manager [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 11:36:03 compute-0 nova_compute[251290]: 2026-02-02 11:36:03.370 251294 DEBUG nova.network.neutron [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 11:36:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:03 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:03 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb 02 11:36:03 compute-0 lvm[259533]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:36:03 compute-0 lvm[259533]: VG ceph_vg0 finished
Feb 02 11:36:03 compute-0 eager_moser[259459]: {}
Feb 02 11:36:03 compute-0 systemd[1]: libpod-4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6.scope: Deactivated successfully.
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:03.835032776 +0000 UTC m=+0.885784928 container died 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:36:03 compute-0 systemd[1]: libpod-4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6.scope: Consumed 1.055s CPU time.
Feb 02 11:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa18535aec9b19697a359d97e95e8763faff03d1d4bf73f67b63f5830bfcfb9-merged.mount: Deactivated successfully.
Feb 02 11:36:03 compute-0 podman[259442]: 2026-02-02 11:36:03.897285621 +0000 UTC m=+0.948037773 container remove 4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:36:03 compute-0 systemd[1]: libpod-conmon-4034dc8dad7264104e0742bf4771fbfc41e769e0198cc704615c33a7fc8fe9f6.scope: Deactivated successfully.
Feb 02 11:36:03 compute-0 sudo[259248]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:36:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:36:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:36:03 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:36:04 compute-0 sudo[259550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:36:04 compute-0 sudo[259550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:04 compute-0 sudo[259550]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:04 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 17 KiB/s wr, 33 op/s
Feb 02 11:36:04 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:04 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.470 251294 DEBUG nova.network.neutron [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.493 251294 INFO nova.compute.manager [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Took 1.12 seconds to deallocate network for instance.
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.537 251294 DEBUG nova.network.neutron [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updated VIF entry in instance network info cache for port 79942433-cf13-432a-ae35-76cf688e4dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.538 251294 DEBUG nova.network.neutron [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [{"id": "79942433-cf13-432a-ae35-76cf688e4dec", "address": "fa:16:3e:5a:87:ce", "network": {"id": "35ed898d-7ce9-4d75-8e38-edd8fff50f91", "bridge": "br-int", "label": "tempest-network-smoke--1518429721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79942433-cf", "ovs_interfaceid": "79942433-cf13-432a-ae35-76cf688e4dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.585 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.586 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.587 251294 DEBUG oslo_concurrency.lockutils [req-6f48f215-bd7f-449c-8a79-e39b8ea57e76 req-7dad5728-eb52-48f0-9309-2e24c76da238 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-773f45b2-ee63-471e-8884-36748ebdf289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.644 251294 DEBUG oslo_concurrency.processutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:36:04 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:36:04 compute-0 ceph-mon[74676]: pgmap v729: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 17 KiB/s wr, 33 op/s
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.965 251294 DEBUG nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.966 251294 DEBUG oslo_concurrency.lockutils [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "773f45b2-ee63-471e-8884-36748ebdf289-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.967 251294 DEBUG oslo_concurrency.lockutils [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.967 251294 DEBUG oslo_concurrency.lockutils [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.967 251294 DEBUG nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] No waiting events found dispatching network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.967 251294 WARNING nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received unexpected event network-vif-plugged-79942433-cf13-432a-ae35-76cf688e4dec for instance with vm_state deleted and task_state None.
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.968 251294 DEBUG nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Received event network-vif-deleted-79942433-cf13-432a-ae35-76cf688e4dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.968 251294 INFO nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Neutron deleted interface 79942433-cf13-432a-ae35-76cf688e4dec; detaching it from the instance and deleting it from the info cache
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.968 251294 DEBUG nova.network.neutron [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:04 compute-0 nova_compute[251290]: 2026-02-02 11:36:04.990 251294 DEBUG nova.compute.manager [req-26709aae-7e21-4ebf-9815-7293065dd6e3 req-3f7cbf4c-f86c-43cb-94ce-228152851213 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Detach interface failed, port_id=79942433-cf13-432a-ae35-76cf688e4dec, reason: Instance 773f45b2-ee63-471e-8884-36748ebdf289 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 02 11:36:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:36:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776432658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:05.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.105 251294 DEBUG oslo_concurrency.processutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.111 251294 DEBUG nova.compute.provider_tree [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.126 251294 DEBUG nova.scheduler.client.report [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.146 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.177 251294 INFO nova.scheduler.client.report [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Deleted allocations for instance 773f45b2-ee63-471e-8884-36748ebdf289
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.235 251294 DEBUG oslo_concurrency.lockutils [None req-b4d8dd5b-3976-4ace-945b-0a94f605e20c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "773f45b2-ee63-471e-8884-36748ebdf289" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:05 compute-0 nova_compute[251290]: 2026-02-02 11:36:05.416 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:05 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:05 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:05 compute-0 sudo[259598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:36:05 compute-0 sudo[259598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:05 compute-0 sudo[259598]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3776432658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:06 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 61 op/s
Feb 02 11:36:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:06 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:06 compute-0 ceph-mon[74676]: pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 61 op/s
Feb 02 11:36:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:36:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:36:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:07.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:07.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:36:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113607 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb 02 11:36:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:07 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:07 compute-0 nova_compute[251290]: 2026-02-02 11:36:07.705 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:08 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:08 compute-0 podman[259628]: 2026-02-02 11:36:08.275832735 +0000 UTC m=+0.062891844 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:36:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:08 compute-0 podman[259629]: 2026-02-02 11:36:08.303636212 +0000 UTC m=+0.088008964 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 11:36:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:08 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:09 compute-0 ceph-mon[74676]: pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:09 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:09 compute-0 nova_compute[251290]: 2026-02-02 11:36:09.859 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:09 compute-0 nova_compute[251290]: 2026-02-02 11:36:09.884 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:10 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a7c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:10 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:10 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:10 compute-0 nova_compute[251290]: 2026-02-02 11:36:10.417 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:11.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:11 compute-0 ceph-mon[74676]: pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:11 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:12 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:12 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:12 compute-0 nova_compute[251290]: 2026-02-02 11:36:12.709 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:13.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:13 compute-0 ceph-mon[74676]: pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 58 op/s
Feb 02 11:36:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:13 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:14 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 02 11:36:14 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:14 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:36:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:15 compute-0 ceph-mon[74676]: pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 02 11:36:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:15 compute-0 nova_compute[251290]: 2026-02-02 11:36:15.418 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:15 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:15 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0044002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:16 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0040004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:36:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:16 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:16] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb 02 11:36:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:16] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb 02 11:36:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:17.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:17.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:36:17 compute-0 ceph-mon[74676]: pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:36:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:17 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:17 compute-0 nova_compute[251290]: 2026-02-02 11:36:17.666 251294 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770032162.6646707, 773f45b2-ee63-471e-8884-36748ebdf289 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:36:17 compute-0 nova_compute[251290]: 2026-02-02 11:36:17.667 251294 INFO nova.compute.manager [-] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] VM Stopped (Lifecycle Event)
Feb 02 11:36:17 compute-0 nova_compute[251290]: 2026-02-02 11:36:17.698 251294 DEBUG nova.compute.manager [None req-557b37da-3ea4-4f7f-a8e6-775814f8df43 - - - - - -] [instance: 773f45b2-ee63-471e-8884-36748ebdf289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:36:17 compute-0 nova_compute[251290]: 2026-02-02 11:36:17.711 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:18 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:18 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113618 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:36:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:19.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:19 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:19 compute-0 ceph-mon[74676]: pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:20 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:20 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:20 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:20 compute-0 nova_compute[251290]: 2026-02-02 11:36:20.421 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:21.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:21 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:21 compute-0 ceph-mon[74676]: pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:22 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:22 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:22.671 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:22.671 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:22.671 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:22 compute-0 nova_compute[251290]: 2026-02-02 11:36:22.713 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:23 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:23 compute-0 ceph-mon[74676]: pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:24 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:24 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000057s ======
Feb 02 11:36:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Feb 02 11:36:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:25 compute-0 nova_compute[251290]: 2026-02-02 11:36:25.423 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:25 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:25 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:25 compute-0 ceph-mon[74676]: pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:25 compute-0 sudo[259696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:36:25 compute-0 sudo[259696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:25 compute-0 sudo[259696]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:26 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.134 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.135 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.154 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.239 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.239 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.246 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.246 251294 INFO nova.compute.claims [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Claim successful on node compute-0.ctlplane.example.com
Feb 02 11:36:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.382 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:26 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:26 compute-0 ceph-mon[74676]: pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb 02 11:36:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1976816185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:36:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2600039199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.843 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.849 251294 DEBUG nova.compute.provider_tree [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.867 251294 DEBUG nova.scheduler.client.report [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.892 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.893 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.938 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.939 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.957 251294 INFO nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 11:36:26 compute-0 nova_compute[251290]: 2026-02-02 11:36:26.990 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 11:36:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:26] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb 02 11:36:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:26] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.089 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.090 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.091 251294 INFO nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Creating image(s)
Feb 02 11:36:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:27.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:36:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:27.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.124 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.156 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.189 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.193 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.251 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.252 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.253 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.253 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.281 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.286 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 1cd1ff52-5053-47d8-96b1-171866a19914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:27 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.523 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 1cd1ff52-5053-47d8-96b1-171866a19914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2600039199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3771876077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.594 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] resizing rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
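Nova shells out to the rbd CLI for the import, then resizes the image to the flavor's root-disk size as logged above. The same resize can be expressed with the python-rbd bindings; a sketch assuming only the pool, client id and conffile shown in the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '1cd1ff52-5053-47d8-96b1-171866a19914_disk') as image:
                image.resize(1073741824)  # grow to the 1 GiB flavor root disk
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()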
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.688 251294 DEBUG nova.objects.instance [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'migration_context' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.692 251294 DEBUG nova.policy [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.708 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.708 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Ensure instance console log exists: /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.709 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.709 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.709 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:27 compute-0 nova_compute[251290]: 2026-02-02 11:36:27.715 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:28 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:28 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:28 compute-0 ceph-mon[74676]: pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:29 compute-0 nova_compute[251290]: 2026-02-02 11:36:29.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:29 compute-0 nova_compute[251290]: 2026-02-02 11:36:29.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:29 compute-0 nova_compute[251290]: 2026-02-02 11:36:29.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:29.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:29.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:29 compute-0 nova_compute[251290]: 2026-02-02 11:36:29.255 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Successfully created port: 281a7e60-30d1-4ce3-825e-626d8446b90a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:36:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:29 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0038001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:36:29
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta']
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:36:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:36:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
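The audit lines show JSON-framed commands dispatched to the monitor. A sketch of issuing the same "df" call through python-rados' mon_command (the "osd blocklist ls" call from the mgr is framed identically):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        # Same JSON envelope the mon logs as cmd=[{"prefix": "df", ...}]
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        print(ret, json.loads(outbuf))
    finally:
        cluster.shutdown()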
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:36:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
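The raw pg targets in the autoscaler run above reproduce as usage_ratio x bias x (target PGs per OSD x OSD count), assuming the default mon_target_pg_per_osd of 100 and the three OSDs implied by the 60 GiB cluster. A worked check against the logged numbers:

    TARGET_PG_PER_OSD = 100   # ceph default (assumption)
    NUM_OSDS = 3              # assumption inferred from cluster size

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 -> 0.0021557249951162337
    print(pg_target(7.185749983720779e-06, 1.0))
    # Pool 'cephfs.cephfs.meta': bias 4.0 -> 0.0006104707950771635
    print(pg_target(5.087256625643029e-07, 4.0))
    # Targets this small are then quantized up to the pool's pg_num floor,
    # matching the "quantized to 1/16/32" suffixes in the log.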
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.047 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Successfully updated port: 281a7e60-30d1-4ce3-825e-626d8446b90a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.066 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.067 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.067 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:36:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f006c00a930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.142 251294 DEBUG nova.compute.manager [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.143 251294 DEBUG nova.compute.manager [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing instance network info cache due to event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.143 251294 DEBUG oslo_concurrency.lockutils [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:36:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:30 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:30 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0048002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.425 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:30 compute-0 ceph-mon[74676]: pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb 02 11:36:30 compute-0 nova_compute[251290]: 2026-02-02 11:36:30.670 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.044 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.044 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.044 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.045 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.045 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:31.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:31 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:36:31 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971348903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.517 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.600254) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191600436, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1108, "num_deletes": 251, "total_data_size": 1933954, "memory_usage": 1956912, "flush_reason": "Manual Compaction"}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191612027, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1216250, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22123, "largest_seqno": 23230, "table_properties": {"data_size": 1211957, "index_size": 1817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11355, "raw_average_key_size": 20, "raw_value_size": 1202567, "raw_average_value_size": 2174, "num_data_blocks": 79, "num_entries": 553, "num_filter_entries": 553, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032095, "oldest_key_time": 1770032095, "file_creation_time": 1770032191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 11731 microseconds, and 3210 cpu microseconds.
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.612104) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1216250 bytes OK
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.612131) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.613577) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.613640) EVENT_LOG_v1 {"time_micros": 1770032191613633, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.613670) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1928851, prev total WAL file size 1928851, number of live WAL files 2.
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.614317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1187KB)], [47(14MB)]
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191614379, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16107179, "oldest_snapshot_seqno": -1}
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.689 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.690 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4573MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.691 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.691 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.693 251294 DEBUG nova.network.neutron [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5531 keys, 12677005 bytes, temperature: kUnknown
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191740236, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12677005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12641112, "index_size": 20935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139653, "raw_average_key_size": 25, "raw_value_size": 12542242, "raw_average_value_size": 2267, "num_data_blocks": 854, "num_entries": 5531, "num_filter_entries": 5531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.740897) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12677005 bytes
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.742256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.7 rd, 100.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.2 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(23.7) write-amplify(10.4) OK, records in: 6011, records dropped: 480 output_compression: NoCompression
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.742292) EVENT_LOG_v1 {"time_micros": 1770032191742278, "job": 24, "event": "compaction_finished", "compaction_time_micros": 126101, "compaction_time_cpu_micros": 24485, "output_level": 6, "num_output_files": 1, "total_output_size": 12677005, "num_input_records": 6011, "num_output_records": 5531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
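The amplification figures in JOB 24's summary follow directly from the event counters: write-amplify is output bytes over the L0 input, and read-write-amplify charges everything read plus everything written against that same L0 input. A worked check:

    l0_in = 1216250          # table #49 (the L0 flush output, compaction input)
    total_in = 16107179      # "input_data_size" = L0 + L6 inputs
    out = 12677005           # table #50 (compaction output)

    print(out / l0_in)               # ~10.42 -> "write-amplify(10.4)"
    print((total_in + out) / l0_in)  # ~23.67 -> "read-write-amplify(23.7)"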
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:36:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3971348903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191743335, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032191745581, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.614205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.745675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.745682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.745684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.745686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:36:31.745687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.748 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.748 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Instance network_info: |[{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.749 251294 DEBUG oslo_concurrency.lockutils [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.750 251294 DEBUG nova.network.neutron [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.752 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Start _get_guest_xml network_info=[{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '8a4b36bd-584f-4a0a-aab3-55c0b12d2d97'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.760 251294 WARNING nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.764 251294 DEBUG nova.virt.libvirt.host [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.765 251294 DEBUG nova.virt.libvirt.host [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.769 251294 DEBUG nova.virt.libvirt.host [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.769 251294 DEBUG nova.virt.libvirt.host [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.770 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.770 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:33:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5413fce8-24ad-46a1-a21e-000a8299c8f6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.770 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.771 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.771 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.771 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.771 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.771 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.772 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.772 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.772 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.772 251294 DEBUG nova.virt.hardware [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
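The hardware.py lines above walk a constraint search: enumerate (sockets, cores, threads) triples whose product equals the vCPU count, bounded by the 65536 limits, then sort by preference. A simplified stand-in (not nova's exact code) that reproduces the single 1:1:1 result for one vCPU:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"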
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.775 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.830 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance 1cd1ff52-5053-47d8-96b1-171866a19914 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.830 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.831 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:36:31 compute-0 nova_compute[251290]: 2026-02-02 11:36:31.872 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:32 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb 02 11:36:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:36:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325016401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.289 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.322 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.329 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:36:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890883994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.357 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.362 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.381 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
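[annotation] The inventory dict above is what the scheduler can actually draw from once placement applies capacity = int((total - reserved) * allocation_ratio) per resource class. A quick check of the numbers reported for this node:

# Figures copied from the set_inventory_for_provider line above.
inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, capacity)  # MEMORY_MB 7167, VCPU 32, DISK_GB 52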
Feb 02 11:36:32 compute-0 kernel: ganesha.nfsd[258402]: segfault at 50 ip 00007f00ef94a32e sp 00007f0074ff8210 error 4 in libntirpc.so.5.8[7f00ef92f000+2c000] likely on CPU 1 (core 0, socket 1)
Feb 02 11:36:32 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
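[annotation] The segfault line gives the faulting instruction pointer and the load range of libntirpc.so.5.8, so the offset to symbolize is simply ip minus the mapping base. A small helper; the library path below is an assumption, substitute whatever debuginfo you have installed:

# Values copied from the kernel segfault line above.
ip = 0x7f00ef94a32e      # faulting instruction pointer
base = 0x7f00ef92f000    # start of the libntirpc.so.5.8 mapping
print(hex(ip - base))    # 0x1b32e, inside the 0x2c000-byte mapping
# Resolve with, e.g.: addr2line -e /usr/lib64/libntirpc.so.5.8 0x1b32e
# The core systemd-coredump collects just below ("Started Process Core Dump")
# can be listed with: coredumpctl list ganesha.nfsd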
Feb 02 11:36:32 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[256072]: 02/02/2026 11:36:32 : epoch 69808b82 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0058004080 fd 38 proxy ignored for local
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.410 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.411 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:32 compute-0 systemd[1]: Started Process Core Dump (PID 260002/UID 0).
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.717 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/877056960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2325016401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 ceph-mon[74676]: pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 11:36:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/890883994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2870422702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:36:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605844131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.838 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.840 251294 DEBUG nova.virt.libvirt.vif [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:36:27Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.840 251294 DEBUG nova.network.os_vif_util [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.841 251294 DEBUG nova.network.os_vif_util [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.842 251294 DEBUG nova.objects.instance [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.862 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] End _get_guest_xml xml=<domain type="kvm">
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <uuid>1cd1ff52-5053-47d8-96b1-171866a19914</uuid>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <name>instance-00000003</name>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <memory>131072</memory>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <vcpu>1</vcpu>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:creationTime>2026-02-02 11:36:31</nova:creationTime>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:flavor name="m1.nano">
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:memory>128</nova:memory>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:disk>1</nova:disk>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:swap>0</nova:swap>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:vcpus>1</nova:vcpus>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </nova:flavor>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:owner>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </nova:owner>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <nova:ports>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:36:32 compute-0 nova_compute[251290]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         </nova:port>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </nova:ports>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </nova:instance>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <sysinfo type="smbios">
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <system>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="manufacturer">RDO</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="product">OpenStack Compute</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="serial">1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="uuid">1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <entry name="family">Virtual Machine</entry>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </system>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <os>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <boot dev="hd"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <smbios mode="sysinfo"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </os>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <features>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <vmcoreinfo/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </features>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <clock offset="utc">
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <timer name="hpet" present="no"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <cpu mode="host-model" match="exact">
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <disk type="network" device="disk">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk">
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </source>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <target dev="vda" bus="virtio"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <disk type="network" device="cdrom">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk.config">
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </source>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:36:32 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <target dev="sda" bus="sata"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <interface type="ethernet">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <mac address="fa:16:3e:25:9b:d9"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <mtu size="1442"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <target dev="tap281a7e60-30"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <serial type="pty">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <log file="/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log" append="off"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <video>
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </video>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <input type="tablet" bus="usb"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <rng model="virtio">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <backend model="random">/dev/urandom</backend>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <controller type="usb" index="0"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     <memballoon model="virtio">
Feb 02 11:36:32 compute-0 nova_compute[251290]:       <stats period="10"/>
Feb 02 11:36:32 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:36:32 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:36:32 compute-0 nova_compute[251290]: </domain>
Feb 02 11:36:32 compute-0 nova_compute[251290]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
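[annotation] End of the generated guest XML. The driver's next step is to hand this document to libvirt; a minimal sketch of that hand-off with the libvirt Python bindings, assuming xml holds the <domain> text printed above:

import libvirt

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; systemd reports the machine as
                               # qemu-2-instance-00000003 further below
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()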
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.863 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Preparing to wait for external event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.863 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.864 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.864 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.865 251294 DEBUG nova.virt.libvirt.vif [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:36:27Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.865 251294 DEBUG nova.network.os_vif_util [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.866 251294 DEBUG nova.network.os_vif_util [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.866 251294 DEBUG os_vif [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.867 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.867 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.868 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.872 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.873 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap281a7e60-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.874 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap281a7e60-30, col_values=(('external_ids', {'iface-id': '281a7e60-30d1-4ce3-825e-626d8446b90a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:9b:d9', 'vm-uuid': '1cd1ff52-5053-47d8-96b1-171866a19914'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:32 compute-0 NetworkManager[49067]: <info>  [1770032192.8776] manager: (tap281a7e60-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.878 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.882 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.883 251294 INFO os_vif [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30')
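[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand transaction above is the ovsdbapp equivalent of a single ovs-vsctl invocation. A sketch of the same plug as a shell-out, with iface-id, MAC and vm-uuid copied from the DbSetCommand line:

import subprocess

bridge, port = 'br-int', 'tap281a7e60-30'
subprocess.run(
    ['ovs-vsctl', '--may-exist', 'add-port', bridge, port,
     '--', 'set', 'Interface', port,
     'external_ids:iface-id=281a7e60-30d1-4ce3-825e-626d8446b90a',
     'external_ids:iface-status=active',
     'external_ids:attached-mac=fa:16:3e:25:9b:d9',
     'external_ids:vm-uuid=1cd1ff52-5053-47d8-96b1-171866a19914'],
    check=True)
# --may-exist mirrors may_exist=True in the logged commands, which is why the
# earlier AddBridgeCommand transaction reported "Transaction caused no change".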
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.927 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.928 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.928 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:25:9b:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.928 251294 INFO nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Using config drive
Feb 02 11:36:32 compute-0 nova_compute[251290]: 2026-02-02 11:36:32.960 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:33.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.192 251294 DEBUG nova.network.neutron [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updated VIF entry in instance network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.193 251294 DEBUG nova.network.neutron [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.232 251294 DEBUG oslo_concurrency.lockutils [req-895c6e38-9e6f-4aa8-ab9a-e707692ae932 req-070340ef-82d3-4101-82ab-e1258345bf4b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.299 251294 INFO nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Creating config drive at /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.305 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp68fl2vir execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.411 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.412 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.412 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.412 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.432 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp68fl2vir" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.463 251294 DEBUG nova.storage.rbd_utils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 1cd1ff52-5053-47d8-96b1-171866a19914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:36:33 compute-0 nova_compute[251290]: 2026-02-02 11:36:33.468 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config 1cd1ff52-5053-47d8-96b1-171866a19914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:36:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.578 251294 DEBUG oslo_concurrency.processutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config 1cd1ff52-5053-47d8-96b1-171866a19914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.579 251294 INFO nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Deleting local config drive /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config because it was imported into RBD.
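[annotation] The config-drive sequence above is: stage metadata in a temp dir, build an ISO9660 image labelled config-2 (the label cloud-init searches for), rbd-import it as <uuid>_disk.config, then unlink the local copy. A condensed replay of the two subprocess calls, with the staging path as logged (/tmp/tmp68fl2vir):

import subprocess

iso = '/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/disk.config'
# Step 1: the mkisofs invocation logged above (publisher/quiet flags omitted).
subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                '/tmp/tmp68fl2vir'], check=True)
# Step 2: the rbd import logged above; nova deletes the local ISO afterwards.
subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                '1cd1ff52-5053-47d8-96b1-171866a19914_disk.config',
                '--image-format=2', '--id', 'openstack',
                '--conf', '/etc/ceph/ceph.conf'], check=True)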
Feb 02 11:36:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/605844131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:36:34 compute-0 kernel: tap281a7e60-30: entered promiscuous mode
Feb 02 11:36:34 compute-0 NetworkManager[49067]: <info>  [1770032194.6279] manager: (tap281a7e60-30): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Feb 02 11:36:34 compute-0 ovn_controller[154901]: 2026-02-02T11:36:34Z|00038|binding|INFO|Claiming lport 281a7e60-30d1-4ce3-825e-626d8446b90a for this chassis.
Feb 02 11:36:34 compute-0 ovn_controller[154901]: 2026-02-02T11:36:34Z|00039|binding|INFO|281a7e60-30d1-4ce3-825e-626d8446b90a: Claiming fa:16:3e:25:9b:d9 10.100.0.5
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.627 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.634 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.638 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:34 compute-0 systemd-udevd[260099]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:36:34 compute-0 systemd-machined[218018]: New machine qemu-2-instance-00000003.
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.659 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:34 compute-0 ovn_controller[154901]: 2026-02-02T11:36:34Z|00040|binding|INFO|Setting lport 281a7e60-30d1-4ce3-825e-626d8446b90a ovn-installed in OVS
Feb 02 11:36:34 compute-0 nova_compute[251290]: 2026-02-02 11:36:34.663 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:34 compute-0 NetworkManager[49067]: <info>  [1770032194.6692] device (tap281a7e60-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:36:34 compute-0 NetworkManager[49067]: <info>  [1770032194.6698] device (tap281a7e60-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:36:34 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Feb 02 11:36:34 compute-0 ovn_controller[154901]: 2026-02-02T11:36:34Z|00041|binding|INFO|Setting lport 281a7e60-30d1-4ce3-825e-626d8446b90a up in Southbound
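[annotation] ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound DB. The binding can be verified straight from the SB database; a sketch assuming ovn-sbctl is on PATH and pointed at the right database:

import subprocess

out = subprocess.run(
    ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
     'logical_port=281a7e60-30d1-4ce3-825e-626d8446b90a'],
    capture_output=True, text=True, check=True)
# chassis should reference compute-0 and up should read true, matching the
# "Claiming" and "Setting lport ... up in Southbound" messages above.
print(out.stdout)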
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.686 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9b:d9 10.100.0.5'], port_security=['fa:16:3e:25:9b:d9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1cd1ff52-5053-47d8-96b1-171866a19914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c79859ff-e41c-4640-9671-2a0c24d00af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b4490b1-257b-4b94-9254-fc57212f9074, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=281a7e60-30d1-4ce3-825e-626d8446b90a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.688 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 281a7e60-30d1-4ce3-825e-626d8446b90a in datapath c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 bound to our chassis
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.689 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2dab3f7-3551-4121-b4ad-e3c2a2b264e7
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.702 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[164e6b73-57ca-4c24-a5f9-372fd05f7104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.703 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2dab3f7-31 in ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.704 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2dab3f7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.705 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[34dc27b5-5e32-497d-8705-eb831450620d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.705 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[4e26ffe8-2230-406b-870d-a1d1511a0540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.717 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[cf91c1cd-a61c-4a86-a8d2-33e3c1790969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.810 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a36c3287-877a-4120-ba5e-62283ecff20a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.840 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[5139a1f3-f521-401c-9940-22326236b63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 systemd-udevd[260102]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:36:34 compute-0 NetworkManager[49067]: <info>  [1770032194.8482] manager: (tapc2dab3f7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.847 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[038030e2-e913-4ebf-80fd-8ed90e6d403e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.876 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[6117c7b1-a271-48b7-80fe-352c44b42cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.880 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb52c24-cc72-4a04-bfaf-66c361e44bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 NetworkManager[49067]: <info>  [1770032194.9029] device (tapc2dab3f7-30): carrier: link connected
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.909 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1302b2-2bf3-4266-9189-17a8ab356c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.928 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[037a4334-2c6b-4cf3-891b-75252182f5a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2dab3f7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384916, 'reachable_time': 23661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260134, 'error': None, 'target': 'ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.945 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[76235f80-cebd-4f72-9edd-6bfe865f6920]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:13e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384916, 'tstamp': 384916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260136, 'error': None, 'target': 'ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.963 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0d6f97-c387-49d2-a27d-10e619d70595]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2dab3f7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384916, 'reachable_time': 23661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260137, 'error': None, 'target': 'ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:34 compute-0 systemd-coredump[260003]: Process 256076 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f00ef94a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Feb 02 11:36:34 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:34.997 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[77f9912b-d24a-4f6d-85df-bf16b6091c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.007 251294 DEBUG nova.compute.manager [req-de806908-f588-44a0-9c10-793604221e1f req-f8a26802-9872-44f4-b616-dce1fa0e76d9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.008 251294 DEBUG oslo_concurrency.lockutils [req-de806908-f588-44a0-9c10-793604221e1f req-f8a26802-9872-44f4-b616-dce1fa0e76d9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.009 251294 DEBUG oslo_concurrency.lockutils [req-de806908-f588-44a0-9c10-793604221e1f req-f8a26802-9872-44f4-b616-dce1fa0e76d9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.009 251294 DEBUG oslo_concurrency.lockutils [req-de806908-f588-44a0-9c10-793604221e1f req-f8a26802-9872-44f4-b616-dce1fa0e76d9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.010 251294 DEBUG nova.compute.manager [req-de806908-f588-44a0-9c10-793604221e1f req-f8a26802-9872-44f4-b616-dce1fa0e76d9 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Processing event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 11:36:35 compute-0 systemd[1]: systemd-coredump@8-260002-0.service: Deactivated successfully.
Feb 02 11:36:35 compute-0 kernel: tapc2dab3f7-30: entered promiscuous mode
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.059 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef6291f-fd6f-466a-8ee3-03669c773e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.061 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2dab3f7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.061 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.061 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2dab3f7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:35 compute-0 NetworkManager[49067]: <info>  [1770032195.0639] manager: (tapc2dab3f7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Feb 02 11:36:35 compute-0 podman[260146]: 2026-02-02 11:36:35.064182945 +0000 UTC m=+0.033957425 container died e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.063 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.068 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2dab3f7-30, col_values=(('external_ids', {'iface-id': 'd62f21c1-3ed5-4b7a-baaf-242cc3d5f303'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:35 compute-0 ovn_controller[154901]: 2026-02-02T11:36:35Z|00042|binding|INFO|Releasing lport d62f21c1-3ed5-4b7a-baaf-242cc3d5f303 from this chassis (sb_readonly=0)
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.069 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.076 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2dab3f7-3551-4121-b4ad-e3c2a2b264e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2dab3f7-3551-4121-b4ad-e3c2a2b264e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.075 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.077 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[78f9e855-3dd2-4003-ba56-31264eb06858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.078 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/c2dab3f7-3551-4121-b4ad-e3c2a2b264e7.pid.haproxy
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID c2dab3f7-3551-4121-b4ad-e3c2a2b264e7
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.079 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'env', 'PROCESS_TAG=haproxy-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2dab3f7-3551-4121-b4ad-e3c2a2b264e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:36:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cb30dcd659472d6294ef4bbd90be22f50f5b2c39c4a9c8dbe4121ff1109ea68-merged.mount: Deactivated successfully.
Feb 02 11:36:35 compute-0 podman[260146]: 2026-02-02 11:36:35.224170762 +0000 UTC m=+0.193945212 container remove e2adabce5391d08dfb5d3880be2f26e3feb6bb275a59e130977f9e9e900f66cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:36:35 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Main process exited, code=exited, status=139/n/a
Feb 02 11:36:35 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Failed with result 'exit-code'.
Feb 02 11:36:35 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.367s CPU time.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.427 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:35 compute-0 podman[260212]: 2026-02-02 11:36:35.500894736 +0000 UTC m=+0.102397348 container create d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 11:36:35 compute-0 podman[260212]: 2026-02-02 11:36:35.426786071 +0000 UTC m=+0.028288693 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:36:35 compute-0 systemd[1]: Started libpod-conmon-d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19.scope.
Feb 02 11:36:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d2eb872c12eadda9ab6371b6218e3a12d3252a21643ecfd7a346292c47122/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:35 compute-0 podman[260212]: 2026-02-02 11:36:35.631582973 +0000 UTC m=+0.233085605 container init d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 11:36:35 compute-0 podman[260212]: 2026-02-02 11:36:35.637158682 +0000 UTC m=+0.238661294 container start d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 02 11:36:35 compute-0 ceph-mon[74676]: pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 11:36:35 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [NOTICE]   (260272) : New worker (260275) forked
Feb 02 11:36:35 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [NOTICE]   (260272) : Loading success.
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.733 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.733 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.736 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.737 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032195.7356539, 1cd1ff52-5053-47d8-96b1-171866a19914 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.737 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] VM Started (Lifecycle Event)
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.740 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 11:36:35 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:35.745 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.746 251294 INFO nova.virt.libvirt.driver [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Instance spawned successfully.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.746 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.769 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.773 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.782 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.783 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.783 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.783 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.784 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.784 251294 DEBUG nova.virt.libvirt.driver [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.818 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.819 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032195.73641, 1cd1ff52-5053-47d8-96b1-171866a19914 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.819 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] VM Paused (Lifecycle Event)
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.875 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.878 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032195.740021, 1cd1ff52-5053-47d8-96b1-171866a19914 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.878 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] VM Resumed (Lifecycle Event)
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.900 251294 INFO nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Took 8.81 seconds to spawn the instance on the hypervisor.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.900 251294 DEBUG nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.905 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.912 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.944 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.973 251294 INFO nova.compute.manager [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Took 9.76 seconds to build instance.
Feb 02 11:36:35 compute-0 nova_compute[251290]: 2026-02-02 11:36:35.997 251294 DEBUG oslo_concurrency.lockutils [None req-ad0046f4-b224-4cdf-a120-e6ceda08d720 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:36:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.344 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.345 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.345 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 11:36:36 compute-0 nova_compute[251290]: 2026-02-02 11:36:36.346 251294 DEBUG nova.objects.instance [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:36 compute-0 ceph-mon[74676]: pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:36:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:36] "GET /metrics HTTP/1.1" 200 48367 "" "Prometheus/2.51.0"
Feb 02 11:36:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:36] "GET /metrics HTTP/1.1" 200 48367 "" "Prometheus/2.51.0"
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.081 251294 DEBUG nova.compute.manager [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.082 251294 DEBUG oslo_concurrency.lockutils [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.082 251294 DEBUG oslo_concurrency.lockutils [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.082 251294 DEBUG oslo_concurrency.lockutils [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.083 251294 DEBUG nova.compute.manager [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.083 251294 WARNING nova.compute.manager [req-1ae7a3d1-f23f-48c0-b6c5-ea0540baa274 req-57a33bc2-a606-4265-814b-890204964174 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a for instance with vm_state active and task_state None.
Feb 02 11:36:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:37.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:36:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:37.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:36:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:37.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.913 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.970 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.990 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:36:37 compute-0 nova_compute[251290]: 2026-02-02 11:36:37.991 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 11:36:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 02 11:36:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:39.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:39 compute-0 podman[260288]: 2026-02-02 11:36:39.278956292 +0000 UTC m=+0.058644663 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 11:36:39 compute-0 podman[260289]: 2026-02-02 11:36:39.315946462 +0000 UTC m=+0.094937013 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 11:36:39 compute-0 ceph-mon[74676]: pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 02 11:36:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113639 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.028 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:40 compute-0 NetworkManager[49067]: <info>  [1770032200.0293] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 02 11:36:40 compute-0 NetworkManager[49067]: <info>  [1770032200.0300] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 02 11:36:40 compute-0 ovn_controller[154901]: 2026-02-02T11:36:40Z|00043|binding|INFO|Releasing lport d62f21c1-3ed5-4b7a-baaf-242cc3d5f303 from this chassis (sb_readonly=0)
Feb 02 11:36:40 compute-0 ovn_controller[154901]: 2026-02-02T11:36:40Z|00044|binding|INFO|Releasing lport d62f21c1-3ed5-4b7a-baaf-242cc3d5f303 from this chassis (sb_readonly=0)
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.034 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.430 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.478 251294 DEBUG nova.compute.manager [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.479 251294 DEBUG nova.compute.manager [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing instance network info cache due to event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.479 251294 DEBUG oslo_concurrency.lockutils [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.479 251294 DEBUG oslo_concurrency.lockutils [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:36:40 compute-0 nova_compute[251290]: 2026-02-02 11:36:40.480 251294 DEBUG nova.network.neutron [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:36:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:36:40.748 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:36:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:41.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:41 compute-0 ceph-mon[74676]: pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 02 11:36:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:41 compute-0 nova_compute[251290]: 2026-02-02 11:36:41.931 251294 DEBUG nova.network.neutron [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updated VIF entry in instance network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:36:41 compute-0 nova_compute[251290]: 2026-02-02 11:36:41.932 251294 DEBUG nova.network.neutron [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:36:41 compute-0 nova_compute[251290]: 2026-02-02 11:36:41.953 251294 DEBUG oslo_concurrency.lockutils [req-a0642e00-a8e0-4a92-bcca-c97448709a60 req-2116cfa8-abe3-4838-b826-ceae24901988 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:36:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 02 11:36:42 compute-0 nova_compute[251290]: 2026-02-02 11:36:42.915 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:43.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:43.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:43 compute-0 ceph-mon[74676]: pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 02 11:36:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:36:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414885384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:36:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:36:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414885384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:36:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 11:36:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2414885384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:36:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2414885384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:36:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:36:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:45.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:45 compute-0 ceph-mon[74676]: pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 11:36:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:45 compute-0 nova_compute[251290]: 2026-02-02 11:36:45.433 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:45 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Scheduled restart job, restart counter is at 9.
Feb 02 11:36:45 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:36:45 compute-0 systemd[1]: ceph-1d33f80b-d6ca-501c-bac7-184379b89279@nfs.cephfs.2.0.compute-0.lrvhze.service: Consumed 1.367s CPU time.
Feb 02 11:36:45 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279...
Feb 02 11:36:45 compute-0 sudo[260374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:36:45 compute-0 sudo[260374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:36:45 compute-0 sudo[260374]: pam_unix(sudo:session): session closed for user root
Feb 02 11:36:45 compute-0 podman[260412]: 2026-02-02 11:36:45.663799161 +0000 UTC m=+0.050872820 container create d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca601ce47abdfadc1ccd529fd226d459fabff3611a207a0dc53a623d940f1af/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca601ce47abdfadc1ccd529fd226d459fabff3611a207a0dc53a623d940f1af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca601ce47abdfadc1ccd529fd226d459fabff3611a207a0dc53a623d940f1af/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca601ce47abdfadc1ccd529fd226d459fabff3611a207a0dc53a623d940f1af/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.lrvhze-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:36:45 compute-0 podman[260412]: 2026-02-02 11:36:45.640128952 +0000 UTC m=+0.027202421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:36:45 compute-0 podman[260412]: 2026-02-02 11:36:45.741235391 +0000 UTC m=+0.128308850 container init d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:36:45 compute-0 podman[260412]: 2026-02-02 11:36:45.747768359 +0000 UTC m=+0.134841798 container start d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:36:45 compute-0 bash[260412]: d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb 02 11:36:45 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.lrvhze for 1d33f80b-d6ca-501c-bac7-184379b89279.
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb 02 11:36:45 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:36:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 11:36:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:46] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Feb 02 11:36:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:46] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Feb 02 11:36:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:36:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:36:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:47.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:36:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:47.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:47.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:47 compute-0 ceph-mon[74676]: pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 11:36:47 compute-0 nova_compute[251290]: 2026-02-02 11:36:47.961 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 11:36:48 compute-0 ovn_controller[154901]: 2026-02-02T11:36:48Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:9b:d9 10.100.0.5
Feb 02 11:36:48 compute-0 ovn_controller[154901]: 2026-02-02T11:36:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:9b:d9 10.100.0.5
Feb 02 11:36:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:49.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:49 compute-0 ceph-mon[74676]: pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 11:36:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 11:36:50 compute-0 nova_compute[251290]: 2026-02-02 11:36:50.489 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:36:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:51.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:36:51 compute-0 ceph-mon[74676]: pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 11:36:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:36:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:36:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:36:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Feb 02 11:36:52 compute-0 ceph-mon[74676]: pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Feb 02 11:36:52 compute-0 nova_compute[251290]: 2026-02-02 11:36:52.964 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:53.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:36:54 compute-0 nova_compute[251290]: 2026-02-02 11:36:54.982 251294 INFO nova.compute.manager [None req-0069d859-6f99-4450-96bf-60643348299f abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Get console output
Feb 02 11:36:54 compute-0 nova_compute[251290]: 2026-02-02 11:36:54.989 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:36:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:55.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:55 compute-0 ceph-mon[74676]: pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:36:55 compute-0 nova_compute[251290]: 2026-02-02 11:36:55.491 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:36:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:36:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:36:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:36:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:36:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:36:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:36:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:56] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Feb 02 11:36:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:36:56] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Feb 02 11:36:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:36:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:36:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:36:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:57 compute-0 ceph-mon[74676]: pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:36:57 compute-0 nova_compute[251290]: 2026-02-02 11:36:57.966 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:36:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:36:58 compute-0 nova_compute[251290]: 2026-02-02 11:36:58.842 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:36:58 compute-0 nova_compute[251290]: 2026-02-02 11:36:58.843 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:36:58 compute-0 nova_compute[251290]: 2026-02-02 11:36:58.843 251294 DEBUG nova.objects.instance [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'flavor' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:36:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:36:59.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:36:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:36:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:36:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:36:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:36:59 compute-0 nova_compute[251290]: 2026-02-02 11:36:59.234 251294 DEBUG nova.objects.instance [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:36:59 compute-0 nova_compute[251290]: 2026-02-02 11:36:59.250 251294 DEBUG nova.network.neutron [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:36:59 compute-0 nova_compute[251290]: 2026-02-02 11:36:59.393 251294 DEBUG nova.policy [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:36:59 compute-0 ceph-mon[74676]: pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:36:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:36:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:36:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.157 251294 DEBUG nova.network.neutron [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Successfully created port: 94f072e6-276a-4f44-a3af-bed20f3eba43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:37:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:37:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.494 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.949 251294 DEBUG nova.network.neutron [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Successfully updated port: 94f072e6-276a-4f44-a3af-bed20f3eba43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.969 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.970 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:37:00 compute-0 nova_compute[251290]: 2026-02-02 11:37:00.970 251294 DEBUG nova.network.neutron [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:37:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:01 compute-0 nova_compute[251290]: 2026-02-02 11:37:01.070 251294 DEBUG nova.compute.manager [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-changed-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:01 compute-0 nova_compute[251290]: 2026-02-02 11:37:01.070 251294 DEBUG nova.compute.manager [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing instance network info cache due to event network-changed-94f072e6-276a-4f44-a3af-bed20f3eba43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:37:01 compute-0 nova_compute[251290]: 2026-02-02 11:37:01.070 251294 DEBUG oslo_concurrency.lockutils [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:37:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:01.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:01 compute-0 ceph-mon[74676]: pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:37:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.609 251294 DEBUG nova.network.neutron [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.629 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.630 251294 DEBUG oslo_concurrency.lockutils [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.630 251294 DEBUG nova.network.neutron [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing network info cache for port 94f072e6-276a-4f44-a3af-bed20f3eba43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.634 251294 DEBUG nova.virt.libvirt.vif [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.634 251294 DEBUG nova.network.os_vif_util [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.634 251294 DEBUG nova.network.os_vif_util [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.635 251294 DEBUG os_vif [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.636 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.636 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.636 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.639 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.639 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94f072e6-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.639 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94f072e6-27, col_values=(('external_ids', {'iface-id': '94f072e6-276a-4f44-a3af-bed20f3eba43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:dd:7e', 'vm-uuid': '1cd1ff52-5053-47d8-96b1-171866a19914'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.6422] manager: (tap94f072e6-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.642 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.648 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.650 251294 INFO os_vif [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27')
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.651 251294 DEBUG nova.virt.libvirt.vif [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.652 251294 DEBUG nova.network.os_vif_util [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.653 251294 DEBUG nova.network.os_vif_util [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.655 251294 DEBUG nova.virt.libvirt.guest [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] attach device xml: <interface type="ethernet">
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:0c:dd:7e"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <target dev="tap94f072e6-27"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]: </interface>
Feb 02 11:37:02 compute-0 nova_compute[251290]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
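
guest.attach_device() is a thin wrapper over libvirt; with both live and persistent requested, the <interface> XML above is applied to the running domain and its saved config in one call. A sketch of the underlying libvirt-python call (the connection URI and flag combination are assumptions consistent with virt_type=kvm in the log):

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:0c:dd:7e"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap94f072e6-27"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("1cd1ff52-5053-47d8-96b1-171866a19914")
    # AFFECT_LIVE patches the running guest, AFFECT_CONFIG the persistent XML
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                                     libvirt.VIR_DOMAIN_AFFECT_CONFIG)
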
Feb 02 11:37:02 compute-0 kernel: tap94f072e6-27: entered promiscuous mode
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.6683] manager: (tap94f072e6-27): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Feb 02 11:37:02 compute-0 ovn_controller[154901]: 2026-02-02T11:37:02Z|00045|binding|INFO|Claiming lport 94f072e6-276a-4f44-a3af-bed20f3eba43 for this chassis.
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.668 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 ovn_controller[154901]: 2026-02-02T11:37:02Z|00046|binding|INFO|94f072e6-276a-4f44-a3af-bed20f3eba43: Claiming fa:16:3e:0c:dd:7e 10.100.0.24
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.678 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:dd:7e 10.100.0.24'], port_security=['fa:16:3e:0c:dd:7e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '1cd1ff52-5053-47d8-96b1-171866a19914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b1e05b1-1f2d-464d-ab65-bb650bbe0f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69a18aad-9efb-4d18-befa-e35eaabd2b41, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=94f072e6-276a-4f44-a3af-bed20f3eba43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.679 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 94f072e6-276a-4f44-a3af-bed20f3eba43 in datapath 8385ea2c-e171-49bd-9f80-d42c25829bfe bound to our chassis
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.681 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8385ea2c-e171-49bd-9f80-d42c25829bfe
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.687 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 ovn_controller[154901]: 2026-02-02T11:37:02Z|00047|binding|INFO|Setting lport 94f072e6-276a-4f44-a3af-bed20f3eba43 ovn-installed in OVS
Feb 02 11:37:02 compute-0 ovn_controller[154901]: 2026-02-02T11:37:02Z|00048|binding|INFO|Setting lport 94f072e6-276a-4f44-a3af-bed20f3eba43 up in Southbound
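
ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound database, which is what later drives Neutron's network-vif-plugged event back to nova. One way to confirm the binding from the compute node (an illustration, not a command taken from this log):

    import json
    import subprocess

    LPORT = "94f072e6-276a-4f44-a3af-bed20f3eba43"
    result = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    )
    rows = json.loads(result.stdout)
    print(rows["headings"])  # the chassis and up columns reflect the claim
    print(rows["data"])
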
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.691 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.695 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[fed02757-a461-48db-a701-c8c828919ad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.697 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8385ea2c-e1 in ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
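
provision_datapath builds the metadata datapath for the network: a veth pair whose inner end lives in the ovnmeta- namespace. The privsep replies around this line are the netlink calls doing that work. A rough pyroute2 sketch of the same steps (interface and namespace names from the log; error handling omitted; an approximation of the agent's privileged helper, not its actual code):

    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe"
    netns.create(NS)  # raises OSError if the namespace already exists

    ipr = IPRoute()
    # outer end stays in the root namespace; it gets plugged into br-int later
    ipr.link("add", ifname="tap8385ea2c-e0", kind="veth",
             peer="tap8385ea2c-e1")
    idx = ipr.link_lookup(ifname="tap8385ea2c-e1")[0]
    ipr.link("set", index=idx, net_ns_fd=NS)  # move the inner end into NS
    ipr.close()
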
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.700 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8385ea2c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.701 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[63854c92-503d-41cf-9528-9556f1bb389e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 systemd-udevd[260494]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.702 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[07627557-ec30-4c8c-ac65-711f6a4bc1d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.7178] device (tap94f072e6-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.7188] device (tap94f072e6-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.717 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8263f1-c737-4a97-b861-3df6a3c1c981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.742 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a6387-2efe-4749-9fab-793700dbb4b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.747 251294 DEBUG nova.virt.libvirt.driver [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.748 251294 DEBUG nova.virt.libvirt.driver [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.748 251294 DEBUG nova.virt.libvirt.driver [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:25:9b:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.749 251294 DEBUG nova.virt.libvirt.driver [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:0c:dd:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.775 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd1074a-f0dd-40d3-9d3f-3d3befb3f042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.780 251294 DEBUG nova.virt.libvirt.guest [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:02</nova:creationTime>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:02 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     <nova:port uuid="94f072e6-276a-4f44-a3af-bed20f3eba43">
Feb 02 11:37:02 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 02 11:37:02 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:02 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:02 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:02 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
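
set_metadata stores the <nova:instance> document on the domain so tools like virsh dumpxml can attribute the guest to its server, flavor, and ports. A sketch of the libvirt call underneath (the 'instance' metadata key is an assumption inferred from the xmlns:instance alias visible in the domain dump logged further below):

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    META = '<nova:instance xmlns:nova="%s">...</nova:instance>' % NOVA_NS

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("1cd1ff52-5053-47d8-96b1-171866a19914")
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, META, "instance",
                    NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                             libvirt.VIR_DOMAIN_AFFECT_CONFIG)
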
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.780 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f77d4dc7-cce0-44fd-bebd-5f97cfc5b18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 systemd-udevd[260497]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.7820] manager: (tap8385ea2c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Feb 02 11:37:02 compute-0 nova_compute[251290]: 2026-02-02 11:37:02.809 251294 DEBUG oslo_concurrency.lockutils [None req-3bcb8a7b-52be-45cf-949c-1dadd47792f4 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
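
The whole hot-plug ran under a per-instance interface lock (held 3.967s per the line above), so concurrent attach/detach requests for the same server serialize. The same oslo.concurrency pattern in miniature (function name and body are illustrative, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized(
        "interface-1cd1ff52-5053-47d8-96b1-171866a19914-None")
    def do_attach_interface():
        ...  # convert the VIF, attach_device(), refresh the network info cache
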
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.817 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb3b7c5-815b-4a0c-8934-c7e2be370922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.821 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdb48fe-7c59-46e0-b641-5a242c456cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 NetworkManager[49067]: <info>  [1770032222.8468] device (tap8385ea2c-e0): carrier: link connected
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.850 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[55ff6f10-6837-415e-816a-0b7ed5d1e3b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.872 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[bef8a032-fec9-4900-84f2-18a2e78be1b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8385ea2c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:6a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387711, 'reachable_time': 33222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260520, 'error': None, 'target': 'ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.887 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7be777-4680-4415-8d96-beea253a4d88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:6a8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387711, 'tstamp': 387711}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260521, 'error': None, 'target': 'ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.906 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[eee69f1a-4da7-4966-9a8f-6b11af5e5904]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8385ea2c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:6a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387711, 'reachable_time': 33222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260523, 'error': None, 'target': 'ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:02 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:02.942 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d87e61-6f05-40d4-9acd-669f4dec5da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.000 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e82bf9ac-ac65-4a15-bcd8-65514bfbb2cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.002 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8385ea2c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.003 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.003 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8385ea2c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:03 compute-0 kernel: tap8385ea2c-e0: entered promiscuous mode
Feb 02 11:37:03 compute-0 nova_compute[251290]: 2026-02-02 11:37:03.006 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:03 compute-0 NetworkManager[49067]: <info>  [1770032223.0073] manager: (tap8385ea2c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.011 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8385ea2c-e0, col_values=(('external_ids', {'iface-id': '5a9d4641-e20a-4528-9099-09e98d7d97ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
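
The three ovsdbapp transactions above move the veth's outer end onto br-int and stamp it with the iface-id that ovn-controller matches against a logical port; the DelPortCommand against br-ex is a no-op cleanup ("Transaction caused no change"). The equivalent ovs-vsctl invocation, purely as an illustration of what those commands amount to (the agent speaks OVSDB directly rather than shelling out):

    import subprocess

    subprocess.run(
        ["ovs-vsctl",
         "--", "--if-exists", "del-port", "br-ex", "tap8385ea2c-e0",
         "--", "--may-exist", "add-port", "br-int", "tap8385ea2c-e0",
         "--", "set", "Interface", "tap8385ea2c-e0",
         "external_ids:iface-id=5a9d4641-e20a-4528-9099-09e98d7d97ac"],
        check=True,
    )
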
Feb 02 11:37:03 compute-0 ovn_controller[154901]: 2026-02-02T11:37:03Z|00049|binding|INFO|Releasing lport 5a9d4641-e20a-4528-9099-09e98d7d97ac from this chassis (sb_readonly=0)
Feb 02 11:37:03 compute-0 nova_compute[251290]: 2026-02-02 11:37:03.013 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.014 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8385ea2c-e171-49bd-9f80-d42c25829bfe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8385ea2c-e171-49bd-9f80-d42c25829bfe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.015 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a03b4-c8df-4064-95fc-f51da177328d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.016 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-8385ea2c-e171-49bd-9f80-d42c25829bfe
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/8385ea2c-e171-49bd-9f80-d42c25829bfe.pid.haproxy
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID 8385ea2c-e171-49bd-9f80-d42c25829bfe
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
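
The rendered config binds the proxy to 169.254.169.254:80 inside the namespace and forwards to the neutron metadata UNIX socket, adding the X-OVN-Network-ID header the backend uses to identify the caller's network. Before the agent spawns haproxy (next line), a file like this can be sanity-checked offline; a sketch:

    import subprocess

    CONF = ("/var/lib/neutron/ovn-metadata-proxy/"
            "8385ea2c-e171-49bd-9f80-d42c25829bfe.conf")
    # haproxy -c parses and validates the config without starting a daemon
    subprocess.run(["haproxy", "-c", "-f", CONF], check=True)
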
Feb 02 11:37:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:03.016 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'env', 'PROCESS_TAG=haproxy-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8385ea2c-e171-49bd-9f80-d42c25829bfe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:37:03 compute-0 nova_compute[251290]: 2026-02-02 11:37:03.018 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [WARNING] 032/113703 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 02 11:37:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [NOTICE] 032/113703 (4) : haproxy version is 2.3.17-d1c9119
Feb 02 11:37:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [NOTICE] 032/113703 (4) : path to executable is /usr/local/sbin/haproxy
Feb 02 11:37:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa[97883]: [ALERT] 032/113703 (4) : backend 'backend' has no server available!
Feb 02 11:37:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:03 compute-0 podman[260555]: 2026-02-02 11:37:03.459729613 +0000 UTC m=+0.089040284 container create 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 11:37:03 compute-0 ceph-mon[74676]: pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:37:03 compute-0 podman[260555]: 2026-02-02 11:37:03.392625919 +0000 UTC m=+0.021936590 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:37:03 compute-0 systemd[1]: Started libpod-conmon-0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea.scope.
Feb 02 11:37:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa12d49311eb43236a24d756c13ba90a2bd9aa36e7f3dd52a168bf0c183a9ce4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:03 compute-0 podman[260555]: 2026-02-02 11:37:03.543308139 +0000 UTC m=+0.172618810 container init 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:37:03 compute-0 podman[260555]: 2026-02-02 11:37:03.550212977 +0000 UTC m=+0.179523628 container start 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 11:37:03 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [NOTICE]   (260574) : New worker (260576) forked
Feb 02 11:37:03 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [NOTICE]   (260574) : Loading success.
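
haproxy's master/worker pair is up, which completes metadata provisioning for this network. From the host, the proxy can be smoke-tested inside the namespace (requires root; the namespace name follows the log, the URL path is an assumption about what the metadata service serves):

    import subprocess

    NS = "ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe"
    subprocess.run(
        ["ip", "netns", "exec", NS,
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=True,
    )
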
Feb 02 11:37:04 compute-0 sudo[260586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:37:04 compute-0 sudo[260586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:04 compute-0 sudo[260586]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:37:04 compute-0 sudo[260611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:37:04 compute-0 sudo[260611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:04 compute-0 sudo[260611]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.959 251294 DEBUG nova.compute.manager [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.961 251294 DEBUG oslo_concurrency.lockutils [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.962 251294 DEBUG oslo_concurrency.lockutils [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.962 251294 DEBUG oslo_concurrency.lockutils [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.962 251294 DEBUG nova.compute.manager [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:04 compute-0 nova_compute[251290]: 2026-02-02 11:37:04.962 251294 WARNING nova.compute.manager [req-26d242d9-3421-4086-b945-8c90c379d343 req-2945d646-9183-4c6d-bcbb-f6cfc2525963 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 for instance with vm_state active and task_state None.
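
This WARNING is benign: nova only registers an event waiter when a code path deliberately blocks on network-vif-plugged (e.g. during spawn). For a hot attach on an already-active instance no waiter exists, so the late event from Neutron pops nothing and is reported as unexpected. A toy model of pop_instance_event's behavior (hypothetical code, not nova's classes):

    waiters = {}  # (instance_uuid, event_name) -> waiter object

    def pop_instance_event(instance_uuid, event_name):
        waiter = waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            # nothing was waiting -> the "Received unexpected event" WARNING
            print(f"No waiting events found dispatching {event_name}")
        return waiter

    pop_instance_event(
        "1cd1ff52-5053-47d8-96b1-171866a19914",
        "network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43")
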
Feb 02 11:37:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:05.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:05 compute-0 ovn_controller[154901]: 2026-02-02T11:37:05Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:dd:7e 10.100.0.24
Feb 02 11:37:05 compute-0 ovn_controller[154901]: 2026-02-02T11:37:05Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:dd:7e 10.100.0.24
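
The DHCPOFFER/DHCPACK for 10.100.0.24 come from ovn-controller's pinctrl thread: OVN answers DHCP natively from the options attached to the logical switch port, so no dnsmasq runs on the compute. A sketch of inspecting those options from the northbound DB (assuming ovn-nbctl is reachable from this node):

    import subprocess

    out = subprocess.run(["ovn-nbctl", "dhcp-options-list"],
                         capture_output=True, text=True, check=True).stdout
    print(out)  # then: ovn-nbctl dhcp-options-get-options <uuid>
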
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.498 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 ceph-mon[74676]: pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.563 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-94f072e6-276a-4f44-a3af-bed20f3eba43" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.564 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-94f072e6-276a-4f44-a3af-bed20f3eba43" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.586 251294 DEBUG nova.objects.instance [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'flavor' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.602 251294 DEBUG nova.network.neutron [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updated VIF entry in instance network info cache for port 94f072e6-276a-4f44-a3af-bed20f3eba43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.603 251294 DEBUG nova.network.neutron [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.609 251294 DEBUG nova.virt.libvirt.vif [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.610 251294 DEBUG nova.network.os_vif_util [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.610 251294 DEBUG nova.network.os_vif_util [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.615 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.617 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.620 251294 DEBUG nova.virt.libvirt.driver [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Attempting to detach device tap94f072e6-27 from instance 1cd1ff52-5053-47d8-96b1-171866a19914 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.621 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] detach device xml: <interface type="ethernet">
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:0c:dd:7e"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <target dev="tap94f072e6-27"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </interface>
Feb 02 11:37:05 compute-0 nova_compute[251290]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
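
Interface detach mirrors the attach: the same <interface> XML is located in the domain (the get_interface_by_cfg probes above) and handed to libvirt, persistent config first per _detach_from_persistent. A sketch of the underlying call, under the same assumptions as the attach sketch earlier:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:0c:dd:7e"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap94f072e6-27"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("1cd1ff52-5053-47d8-96b1-171866a19914")
    # persistent-config pass only; a live pass would use AFFECT_LIVE
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
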
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.622 251294 DEBUG oslo_concurrency.lockutils [req-b09e3075-24d5-4da6-a416-d1c9d280b73b req-48f785c1-83bb-4918-9219-b26ee815f898 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.628 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.633 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface>not found in domain: <domain type='kvm' id='2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <name>instance-00000003</name>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <uuid>1cd1ff52-5053-47d8-96b1-171866a19914</uuid>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:02</nova:creationTime>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:port uuid="94f072e6-276a-4f44-a3af-bed20f3eba43">
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <system>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='serial'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='uuid'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </system>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <os>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </os>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <features>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </features>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk' index='2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk.config' index='1'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:25:9b:d9'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='tap281a7e60-30'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:0c:dd:7e'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='tap94f072e6-27'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='net1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </target>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </console>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <video>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </video>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c254,c839</label>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c254,c839</imagelabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </domain>
Feb 02 11:37:05 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.633 251294 INFO nova.virt.libvirt.driver [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully detached device tap94f072e6-27 from instance 1cd1ff52-5053-47d8-96b1-171866a19914 from the persistent domain config.
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.634 251294 DEBUG nova.virt.libvirt.driver [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] (1/8): Attempting to detach device tap94f072e6-27 with device alias net1 from instance 1cd1ff52-5053-47d8-96b1-171866a19914 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.634 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] detach device xml: <interface type="ethernet">
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:0c:dd:7e"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <target dev="tap94f072e6-27"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </interface>
Feb 02 11:37:05 compute-0 nova_compute[251290]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 11:37:05 compute-0 sudo[260671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:37:05 compute-0 sudo[260671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:05 compute-0 kernel: tap94f072e6-27 (unregistering): left promiscuous mode
Feb 02 11:37:05 compute-0 sudo[260671]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:05 compute-0 NetworkManager[49067]: <info>  [1770032225.6956] device (tap94f072e6-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:37:05 compute-0 ovn_controller[154901]: 2026-02-02T11:37:05Z|00050|binding|INFO|Releasing lport 94f072e6-276a-4f44-a3af-bed20f3eba43 from this chassis (sb_readonly=0)
Feb 02 11:37:05 compute-0 ovn_controller[154901]: 2026-02-02T11:37:05Z|00051|binding|INFO|Setting lport 94f072e6-276a-4f44-a3af-bed20f3eba43 down in Southbound
Feb 02 11:37:05 compute-0 ovn_controller[154901]: 2026-02-02T11:37:05Z|00052|binding|INFO|Removing iface tap94f072e6-27 ovn-installed in OVS
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.706 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.710 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.712 251294 DEBUG nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Received event <DeviceRemovedEvent: 1770032225.7120395, 1cd1ff52-5053-47d8-96b1-171866a19914 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.715 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.716 251294 DEBUG nova.virt.libvirt.driver [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Start waiting for the detach event from libvirt for device tap94f072e6-27 with device alias net1 for instance 1cd1ff52-5053-47d8-96b1-171866a19914 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.717 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.723 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> not found in domain: <domain type='kvm' id='2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <name>instance-00000003</name>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <uuid>1cd1ff52-5053-47d8-96b1-171866a19914</uuid>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:02</nova:creationTime>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:port uuid="94f072e6-276a-4f44-a3af-bed20f3eba43">
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <system>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='serial'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='uuid'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </system>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <os>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </os>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <features>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </features>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk' index='2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk.config' index='1'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:25:9b:d9'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target dev='tap281a7e60-30'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       </target>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </console>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <video>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </video>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c254,c839</label>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c254,c839</imagelabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </domain>
Feb 02 11:37:05 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.724 251294 INFO nova.virt.libvirt.driver [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully detached device tap94f072e6-27 from instance 1cd1ff52-5053-47d8-96b1-171866a19914 from the live domain config.
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.724 251294 DEBUG nova.virt.libvirt.vif [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.725 251294 DEBUG nova.network.os_vif_util [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:05 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:05.724 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:dd:7e 10.100.0.24'], port_security=['fa:16:3e:0c:dd:7e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '1cd1ff52-5053-47d8-96b1-171866a19914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b1e05b1-1f2d-464d-ab65-bb650bbe0f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69a18aad-9efb-4d18-befa-e35eaabd2b41, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=94f072e6-276a-4f44-a3af-bed20f3eba43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.725 251294 DEBUG nova.network.os_vif_util [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.726 251294 DEBUG os_vif [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.728 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.728 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94f072e6-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:05 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:05.727 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 94f072e6-276a-4f44-a3af-bed20f3eba43 in datapath 8385ea2c-e171-49bd-9f80-d42c25829bfe unbound from our chassis
Feb 02 11:37:05 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:05.730 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8385ea2c-e171-49bd-9f80-d42c25829bfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.731 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.732 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:05 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:05.731 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfeb725-1c24-4f46-a676-8005e9be1d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:05 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:05.734 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe namespace which is not needed anymore
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.734 251294 INFO os_vif [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27')
Feb 02 11:37:05 compute-0 nova_compute[251290]: 2026-02-02 11:37:05.735 251294 DEBUG nova.virt.libvirt.guest [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:05</nova:creationTime>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:05 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:05 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:05 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:05 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:05 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 02 11:37:05 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [NOTICE]   (260574) : haproxy version is 2.8.14-c23fe91
Feb 02 11:37:05 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [NOTICE]   (260574) : path to executable is /usr/sbin/haproxy
Feb 02 11:37:05 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [WARNING]  (260574) : Exiting Master process...
Feb 02 11:37:05 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [ALERT]    (260574) : Current worker (260576) exited with code 143 (Terminated)
Feb 02 11:37:05 compute-0 neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe[260570]: [WARNING]  (260574) : All workers exited. Exiting... (0)
Feb 02 11:37:05 compute-0 systemd[1]: libpod-0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea.scope: Deactivated successfully.
Feb 02 11:37:05 compute-0 podman[260719]: 2026-02-02 11:37:05.872663598 +0000 UTC m=+0.045170587 container died 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 11:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea-userdata-shm.mount: Deactivated successfully.
Feb 02 11:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa12d49311eb43236a24d756c13ba90a2bd9aa36e7f3dd52a168bf0c183a9ce4-merged.mount: Deactivated successfully.
Feb 02 11:37:05 compute-0 podman[260719]: 2026-02-02 11:37:05.920939912 +0000 UTC m=+0.093446891 container cleanup 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 02 11:37:05 compute-0 systemd[1]: libpod-conmon-0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea.scope: Deactivated successfully.
Feb 02 11:37:05 compute-0 podman[260749]: 2026-02-02 11:37:05.997714623 +0000 UTC m=+0.059058694 container remove 0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:37:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.002 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[24c1eded-a50d-43f8-bd0f-11a34f53f1c3]: (4, ('Mon Feb  2 11:37:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe (0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea)\n0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea\nMon Feb  2 11:37:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe (0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea)\n0d4eb68b9a867d87992704f06a674990251787ee45b4fda6df24c5a125f061ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.004 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[c044dc81-fd51-4a08-8658-b645c954c7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.005 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8385ea2c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:06 compute-0 nova_compute[251290]: 2026-02-02 11:37:06.008 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:06 compute-0 kernel: tap8385ea2c-e0: left promiscuous mode
Feb 02 11:37:06 compute-0 nova_compute[251290]: 2026-02-02 11:37:06.009 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.013 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[b598be86-d4a4-45a7-a4a1-d2e50fc6c868]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 nova_compute[251290]: 2026-02-02 11:37:06.016 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.028 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[3af3a9b9-f64b-4b75-8a70-2d5bb66e4a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.030 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[92229056-81cc-4de2-a842-119e9b829a29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.044 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e0137334-00c6-4b72-8438-d51e76b50c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387703, 'reachable_time': 28790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260765, 'error': None, 'target': 'ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d8385ea2c\x2de171\x2d49bd\x2d9f80\x2dd42c25829bfe.mount: Deactivated successfully.
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.048 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8385ea2c-e171-49bd-9f80-d42c25829bfe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:37:06 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:06.048 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2a9f87-f460-4260-b8fb-db84fddd2dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 16 KiB/s wr, 2 op/s
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:37:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:37:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:37:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:37:06 compute-0 sudo[260766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:37:06 compute-0 sudo[260766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:06 compute-0 sudo[260766]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:06 compute-0 sudo[260791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:37:06 compute-0 sudo[260791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:37:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:06] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Feb 02 11:37:07 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.100 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.101 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.101 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.101 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.101 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.101 251294 WARNING nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 for instance with vm_state active and task_state None.
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-unplugged-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-unplugged-94f072e6-276a-4f44-a3af-bed20f3eba43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 WARNING nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-unplugged-94f072e6-276a-4f44-a3af-bed20f3eba43 for instance with vm_state active and task_state None.
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.102 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.103 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.103 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.103 251294 DEBUG oslo_concurrency.lockutils [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.103 251294 DEBUG nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.103 251294 WARNING nova.compute.manager [req-8761f0fe-d91e-4f45-9cb7-ed423dbc8c4c req-45a26e4f-30a0-46dd-8ab4-23111153acfc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-plugged-94f072e6-276a-4f44-a3af-bed20f3eba43 for instance with vm_state active and task_state None.
Feb 02 11:37:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:07.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:07 compute-0 ceph-mon[74676]: pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 16 KiB/s wr, 2 op/s
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:37:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:37:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:07.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.241039663 +0000 UTC m=+0.083370962 container create c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.185284904 +0000 UTC m=+0.027616213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:07 compute-0 systemd[1]: Started libpod-conmon-c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2.scope.
Feb 02 11:37:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.315082146 +0000 UTC m=+0.157413455 container init c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.322453097 +0000 UTC m=+0.164784386 container start c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.326510653 +0000 UTC m=+0.168841962 container attach c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:37:07 compute-0 boring_wright[260873]: 167 167
Feb 02 11:37:07 compute-0 systemd[1]: libpod-c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2.scope: Deactivated successfully.
Feb 02 11:37:07 compute-0 conmon[260873]: conmon c1fe586e3f3aca243842 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2.scope/container/memory.events
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.330789396 +0000 UTC m=+0.173120685 container died c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c3d514d84f02bfeedb225f7a473e49a6d0a89627517c715aa804ee778c7a9cf-merged.mount: Deactivated successfully.
Feb 02 11:37:07 compute-0 podman[260857]: 2026-02-02 11:37:07.379436671 +0000 UTC m=+0.221767960 container remove c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:37:07 compute-0 systemd[1]: libpod-conmon-c1fe586e3f3aca2438420c4b6aac8c013db3709e47d3c5808313d6e26834b6e2.scope: Deactivated successfully.
Feb 02 11:37:07 compute-0 podman[260899]: 2026-02-02 11:37:07.520195667 +0000 UTC m=+0.042672855 container create 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:37:07 compute-0 systemd[1]: Started libpod-conmon-7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd.scope.
Feb 02 11:37:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:07 compute-0 podman[260899]: 2026-02-02 11:37:07.501521661 +0000 UTC m=+0.023998879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:07 compute-0 podman[260899]: 2026-02-02 11:37:07.607416077 +0000 UTC m=+0.129893285 container init 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:37:07 compute-0 podman[260899]: 2026-02-02 11:37:07.615383975 +0000 UTC m=+0.137861163 container start 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:37:07 compute-0 podman[260899]: 2026-02-02 11:37:07.619299327 +0000 UTC m=+0.141776535 container attach 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.743 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.744 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:37:07 compute-0 nova_compute[251290]: 2026-02-02 11:37:07.744 251294 DEBUG nova.network.neutron [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:37:07 compute-0 strange_ellis[260916]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:37:07 compute-0 strange_ellis[260916]: --> All data devices are unavailable
Feb 02 11:37:07 compute-0 systemd[1]: libpod-7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd.scope: Deactivated successfully.
Feb 02 11:37:07 compute-0 podman[260932]: 2026-02-02 11:37:07.99293507 +0000 UTC m=+0.020828458 container died 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa27c36f8a74cd597525ac8205b029dd2ef8cc14b50f9a32bea47bb4701e549-merged.mount: Deactivated successfully.
Feb 02 11:37:08 compute-0 podman[260932]: 2026-02-02 11:37:08.030492317 +0000 UTC m=+0.058385695 container remove 7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ellis, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:37:08 compute-0 systemd[1]: libpod-conmon-7a1e657fc1c7c1fa41f7fdbd436120cf9457bd23c74fcea07b1c15de6ea8d5dd.scope: Deactivated successfully.
Feb 02 11:37:08 compute-0 sudo[260791]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:08 compute-0 ceph-mon[74676]: pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Feb 02 11:37:08 compute-0 ceph-mon[74676]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb 02 11:37:08 compute-0 sudo[260948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:37:08 compute-0 sudo[260948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:08 compute-0 sudo[260948]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:08 compute-0 sudo[260973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:37:08 compute-0 sudo[260973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.616884021 +0000 UTC m=+0.043660223 container create af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:37:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Feb 02 11:37:08 compute-0 systemd[1]: Started libpod-conmon-af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc.scope.
Feb 02 11:37:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.598104632 +0000 UTC m=+0.024880874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.702929048 +0000 UTC m=+0.129705280 container init af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.709350552 +0000 UTC m=+0.136126754 container start af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.713246914 +0000 UTC m=+0.140023116 container attach af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:37:08 compute-0 elated_cohen[261053]: 167 167
Feb 02 11:37:08 compute-0 systemd[1]: libpod-af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc.scope: Deactivated successfully.
Feb 02 11:37:08 compute-0 conmon[261053]: conmon af4663ac0e9de1ee99e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc.scope/container/memory.events
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.715320573 +0000 UTC m=+0.142096775 container died af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e2cf4fd3cd6acf2dc653c46213145172fabc23402241e5fab2c688afc45f8ee-merged.mount: Deactivated successfully.
Feb 02 11:37:08 compute-0 podman[261037]: 2026-02-02 11:37:08.75429165 +0000 UTC m=+0.181067852 container remove af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:37:08 compute-0 systemd[1]: libpod-conmon-af4663ac0e9de1ee99e952c3d09340428de7893040be97f9075cc44f633df9fc.scope: Deactivated successfully.
Feb 02 11:37:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:08 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:08 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:08 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:08 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:08 compute-0 podman[261078]: 2026-02-02 11:37:08.909258044 +0000 UTC m=+0.052237799 container create 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Feb 02 11:37:08 compute-0 ovn_controller[154901]: 2026-02-02T11:37:08Z|00053|binding|INFO|Releasing lport d62f21c1-3ed5-4b7a-baaf-242cc3d5f303 from this chassis (sb_readonly=0)
Feb 02 11:37:08 compute-0 nova_compute[251290]: 2026-02-02 11:37:08.947 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:08 compute-0 systemd[1]: Started libpod-conmon-7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e.scope.
Feb 02 11:37:08 compute-0 podman[261078]: 2026-02-02 11:37:08.885344318 +0000 UTC m=+0.028324103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9df32a55b9e5885d124b99989ce088f2c5b4d94ea3e4b82a7a59cfa98c7a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9df32a55b9e5885d124b99989ce088f2c5b4d94ea3e4b82a7a59cfa98c7a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9df32a55b9e5885d124b99989ce088f2c5b4d94ea3e4b82a7a59cfa98c7a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9df32a55b9e5885d124b99989ce088f2c5b4d94ea3e4b82a7a59cfa98c7a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:09 compute-0 podman[261078]: 2026-02-02 11:37:09.001475758 +0000 UTC m=+0.144455523 container init 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:37:09 compute-0 podman[261078]: 2026-02-02 11:37:09.007576953 +0000 UTC m=+0.150556708 container start 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:37:09 compute-0 podman[261078]: 2026-02-02 11:37:09.013142402 +0000 UTC m=+0.156122197 container attach 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:37:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:09.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.231 251294 DEBUG nova.compute.manager [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-deleted-94f072e6-276a-4f44-a3af-bed20f3eba43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.232 251294 INFO nova.compute.manager [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Neutron deleted interface 94f072e6-276a-4f44-a3af-bed20f3eba43; detaching it from the instance and deleting it from the info cache
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.233 251294 DEBUG nova.network.neutron [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.278 251294 DEBUG nova.objects.instance [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lazy-loading 'system_metadata' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:37:09 compute-0 epic_germain[261095]: {
Feb 02 11:37:09 compute-0 epic_germain[261095]:     "1": [
Feb 02 11:37:09 compute-0 epic_germain[261095]:         {
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "devices": [
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "/dev/loop3"
Feb 02 11:37:09 compute-0 epic_germain[261095]:             ],
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "lv_name": "ceph_lv0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "lv_size": "21470642176",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "name": "ceph_lv0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "tags": {
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.cluster_name": "ceph",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.crush_device_class": "",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.encrypted": "0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.osd_id": "1",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.type": "block",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.vdo": "0",
Feb 02 11:37:09 compute-0 epic_germain[261095]:                 "ceph.with_tpm": "0"
Feb 02 11:37:09 compute-0 epic_germain[261095]:             },
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "type": "block",
Feb 02 11:37:09 compute-0 epic_germain[261095]:             "vg_name": "ceph_vg0"
Feb 02 11:37:09 compute-0 epic_germain[261095]:         }
Feb 02 11:37:09 compute-0 epic_germain[261095]:     ]
Feb 02 11:37:09 compute-0 epic_germain[261095]: }
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.333 251294 DEBUG nova.objects.instance [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lazy-loading 'flavor' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.342 251294 INFO nova.network.neutron [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Port 94f072e6-276a-4f44-a3af-bed20f3eba43 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.342 251294 DEBUG nova.network.neutron [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:09 compute-0 systemd[1]: libpod-7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e.scope: Deactivated successfully.
Feb 02 11:37:09 compute-0 conmon[261095]: conmon 7b78f84dbf7eb65047b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e.scope/container/memory.events
Feb 02 11:37:09 compute-0 podman[261078]: 2026-02-02 11:37:09.353279695 +0000 UTC m=+0.496259470 container died 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.382 251294 DEBUG nova.virt.libvirt.vif [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.382 251294 DEBUG nova.network.os_vif_util [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.383 251294 DEBUG nova.network.os_vif_util [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.386 251294 DEBUG nova.virt.libvirt.guest [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.387 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.392 251294 DEBUG nova.virt.libvirt.guest [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface>not found in domain: <domain type='kvm' id='2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <name>instance-00000003</name>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <uuid>1cd1ff52-5053-47d8-96b1-171866a19914</uuid>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:05</nova:creationTime>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <system>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='serial'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='uuid'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </system>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <os>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </os>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <features>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </features>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk' index='2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk.config' index='1'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:25:9b:d9'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='tap281a7e60-30'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </target>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </console>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <video>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </video>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c254,c839</label>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c254,c839</imagelabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]: </domain>
Feb 02 11:37:09 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.395 251294 DEBUG nova.virt.libvirt.guest [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdf9df32a55b9e5885d124b99989ce088f2c5b4d94ea3e4b82a7a59cfa98c7a3-merged.mount: Deactivated successfully.
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.404 251294 DEBUG nova.virt.libvirt.guest [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0c:dd:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap94f072e6-27"/></interface> not found in domain: <domain type='kvm' id='2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <name>instance-00000003</name>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <uuid>1cd1ff52-5053-47d8-96b1-171866a19914</uuid>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:05</nova:creationTime>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <system>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='serial'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='uuid'>1cd1ff52-5053-47d8-96b1-171866a19914</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </system>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <os>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </os>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <features>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </features>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk' index='2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/1cd1ff52-5053-47d8-96b1-171866a19914_disk.config' index='1'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </source>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:25:9b:d9'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target dev='tap281a7e60-30'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       </target>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914/console.log' append='off'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </console>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </input>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <video>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </video>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c254,c839</label>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c254,c839</imagelabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:37:09 compute-0 nova_compute[251290]: </domain>
Feb 02 11:37:09 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.405 251294 WARNING nova.virt.libvirt.driver [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Detaching interface fa:16:3e:0c:dd:7e failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap94f072e6-27' not found.
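[annotation] The WARNING is nova's benign-race path: the device disappeared from the guest before the detach ran, so the DeviceNotFound is swallowed and host-side cleanup continues rather than failing the request. A sketch of that tolerant-detach pattern; DeviceNotFound and FakeGuest below are stand-ins (the real exception is nova.exception.DeviceNotFound), not nova's code:

    class DeviceNotFound(Exception):
        """Stand-in for nova.exception.DeviceNotFound."""

    class FakeGuest:
        """Illustrative stand-in for nova's libvirt Guest wrapper."""
        def detach_device(self, cfg):
            raise DeviceNotFound()  # the device is already gone

    def detach_interface(guest, cfg, mac):
        try:
            guest.detach_device(cfg)
        except DeviceNotFound:
            # Benign race: warn and fall through so the VIF is still
            # unplugged from the host bridge.
            print("Detaching interface %s failed because the device is "
                  "no longer found on the guest." % mac)

    detach_interface(FakeGuest(), '<interface .../>', 'fa:16:3e:0c:dd:7e')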
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.406 251294 DEBUG nova.virt.libvirt.vif [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.406 251294 DEBUG nova.network.os_vif_util [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converting VIF {"id": "94f072e6-276a-4f44-a3af-bed20f3eba43", "address": "fa:16:3e:0c:dd:7e", "network": {"id": "8385ea2c-e171-49bd-9f80-d42c25829bfe", "bridge": "br-int", "label": "tempest-network-smoke--619908791", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94f072e6-27", "ovs_interfaceid": "94f072e6-276a-4f44-a3af-bed20f3eba43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.407 251294 DEBUG nova.network.os_vif_util [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.407 251294 DEBUG os_vif [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.409 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.409 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94f072e6-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.410 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.412 251294 INFO os_vif [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:dd:7e,bridge_name='br-int',has_traffic_filtering=True,id=94f072e6-276a-4f44-a3af-bed20f3eba43,network=Network(8385ea2c-e171-49bd-9f80-d42c25829bfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94f072e6-27')
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.413 251294 DEBUG nova.virt.libvirt.guest [req-12041e13-0159-44b4-928f-4cb1350a3bfc req-927e400c-54f4-4b7a-8134-2a839b44c991 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-219986626</nova:name>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:37:09</nova:creationTime>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     <nova:port uuid="281a7e60-30d1-4ce3-825e-626d8446b90a">
Feb 02 11:37:09 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:37:09 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:37:09 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:37:09 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:37:09 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.416 251294 DEBUG oslo_concurrency.lockutils [None req-3bf6630c-2b35-427b-a2ce-b6d74ef72a8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-1cd1ff52-5053-47d8-96b1-171866a19914-94f072e6-276a-4f44-a3af-bed20f3eba43" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:09 compute-0 podman[261078]: 2026-02-02 11:37:09.434536945 +0000 UTC m=+0.577516700 container remove 7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_germain, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:37:09 compute-0 systemd[1]: libpod-conmon-7b78f84dbf7eb65047b61aa48056324f1b08c28ee645a312b02070f564fdda6e.scope: Deactivated successfully.
Feb 02 11:37:09 compute-0 sudo[260973]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:09 compute-0 podman[261104]: 2026-02-02 11:37:09.50724737 +0000 UTC m=+0.123955765 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:37:09 compute-0 podman[261112]: 2026-02-02 11:37:09.527154181 +0000 UTC m=+0.142951480 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 02 11:37:09 compute-0 sudo[261152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:37:09 compute-0 sudo[261152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:09 compute-0 sudo[261152]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:09 compute-0 sudo[261182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:37:09 compute-0 sudo[261182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.646 251294 DEBUG nova.compute.manager [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.647 251294 DEBUG nova.compute.manager [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing instance network info cache due to event network-changed-281a7e60-30d1-4ce3-825e-626d8446b90a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.647 251294 DEBUG oslo_concurrency.lockutils [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.647 251294 DEBUG oslo_concurrency.lockutils [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.647 251294 DEBUG nova.network.neutron [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Refreshing network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.793 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.794 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.794 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.794 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.794 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.796 251294 INFO nova.compute.manager [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Terminating instance
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.797 251294 DEBUG nova.compute.manager [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 11:37:09 compute-0 kernel: tap281a7e60-30 (unregistering): left promiscuous mode
Feb 02 11:37:09 compute-0 NetworkManager[49067]: <info>  [1770032229.8529] device (tap281a7e60-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:37:09 compute-0 ovn_controller[154901]: 2026-02-02T11:37:09Z|00054|binding|INFO|Releasing lport 281a7e60-30d1-4ce3-825e-626d8446b90a from this chassis (sb_readonly=0)
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.861 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:09 compute-0 ovn_controller[154901]: 2026-02-02T11:37:09Z|00055|binding|INFO|Setting lport 281a7e60-30d1-4ce3-825e-626d8446b90a down in Southbound
Feb 02 11:37:09 compute-0 ovn_controller[154901]: 2026-02-02T11:37:09Z|00056|binding|INFO|Removing iface tap281a7e60-30 ovn-installed in OVS
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.864 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:09.869 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9b:d9 10.100.0.5'], port_security=['fa:16:3e:25:9b:d9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1cd1ff52-5053-47d8-96b1-171866a19914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c79859ff-e41c-4640-9671-2a0c24d00af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b4490b1-257b-4b94-9254-fc57212f9074, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=281a7e60-30d1-4ce3-825e-626d8446b90a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:37:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:09.870 165304 INFO neutron.agent.ovn.metadata.agent [-] Port 281a7e60-30d1-4ce3-825e-626d8446b90a in datapath c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 unbound from our chassis
Feb 02 11:37:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:09.872 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:37:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:09.873 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[0614c033-8338-458f-940e-4ba7fd11f97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:09.874 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 namespace which is not needed anymore
Feb 02 11:37:09 compute-0 nova_compute[251290]: 2026-02-02 11:37:09.876 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:09 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 02 11:37:09 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 13.977s CPU time.
Feb 02 11:37:09 compute-0 systemd-machined[218018]: Machine qemu-2-instance-00000003 terminated.
Feb 02 11:37:10 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [NOTICE]   (260272) : haproxy version is 2.8.14-c23fe91
Feb 02 11:37:10 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [NOTICE]   (260272) : path to executable is /usr/sbin/haproxy
Feb 02 11:37:10 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [WARNING]  (260272) : Exiting Master process...
Feb 02 11:37:10 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [ALERT]    (260272) : Current worker (260275) exited with code 143 (Terminated)
Feb 02 11:37:10 compute-0 neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7[260263]: [WARNING]  (260272) : All workers exited. Exiting... (0)
Feb 02 11:37:10 compute-0 systemd[1]: libpod-d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19.scope: Deactivated successfully.
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.020 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 podman[261269]: 2026-02-02 11:37:10.021536036 +0000 UTC m=+0.049638244 container died d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.024 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.034 251294 INFO nova.virt.libvirt.driver [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Instance destroyed successfully.
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.034 251294 DEBUG nova.objects.instance [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'resources' on Instance uuid 1cd1ff52-5053-47d8-96b1-171866a19914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.064 251294 DEBUG nova.virt.libvirt.vif [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-219986626',display_name='tempest-TestNetworkBasicOps-server-219986626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-219986626',id=3,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALPivmHF0+GB+pjPrAwqaEZhCPK2QrIACo9j+dHanM9cfHXTkE0jGmzzKkKGhA9eeUbvbvO6MMpEC0fpPFfP8NzaoVP2iqZSTSdZJjnyGp/vR+G1xbLVIz8nFJBYnUgZg==',key_name='tempest-TestNetworkBasicOps-70721191',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:36:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-yvoq90ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:36:35Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=1cd1ff52-5053-47d8-96b1-171866a19914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.066 251294 DEBUG nova.network.os_vif_util [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.067 251294 DEBUG nova.network.os_vif_util [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.068 251294 DEBUG os_vif [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.069 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.070 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281a7e60-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.075 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.078 251294 INFO os_vif [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9b:d9,bridge_name='br-int',has_traffic_filtering=True,id=281a7e60-30d1-4ce3-825e-626d8446b90a,network=Network(c2dab3f7-3551-4121-b4ad-e3c2a2b264e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281a7e60-30')
Feb 02 11:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-707d2eb872c12eadda9ab6371b6218e3a12d3252a21643ecfd7a346292c47122-merged.mount: Deactivated successfully.
Feb 02 11:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19-userdata-shm.mount: Deactivated successfully.
Feb 02 11:37:10 compute-0 podman[261269]: 2026-02-02 11:37:10.166658687 +0000 UTC m=+0.194760895 container cleanup d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.074666839 +0000 UTC m=+0.070999807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:10 compute-0 ceph-mon[74676]: pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Feb 02 11:37:10 compute-0 systemd[1]: libpod-conmon-d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19.scope: Deactivated successfully.
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.211358759 +0000 UTC m=+0.207691717 container create b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:37:10 compute-0 systemd[1]: Started libpod-conmon-b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a.scope.
Feb 02 11:37:10 compute-0 podman[261343]: 2026-02-02 11:37:10.263576306 +0000 UTC m=+0.070048170 container remove d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.269 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[540eddbf-d985-4ecf-9803-d7c1a6f96734]: (4, ('Mon Feb  2 11:37:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 (d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19)\nd1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19\nMon Feb  2 11:37:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 (d1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19)\nd1cde348bbaca19c18cb40840495ceaad705b3160bcfa7157f55e24560b06b19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.272 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[ecad8ccf-dc48-459a-b07b-861889ea4415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.273 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2dab3f7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.275 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 kernel: tapc2dab3f7-30: left promiscuous mode
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.282 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.283 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.286 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[811b8c90-d565-4fc8-8c5a-5f390b5b825b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.308 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[88fccbee-0e4b-4bd6-8336-f85f4ac9f2b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.310 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[4c004fd7-5812-4af9-a504-dcf3e676c604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.327 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc6c744-ecdd-4b95-b786-f272297f5b4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384909, 'reachable_time': 40131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261364, 'error': None, 'target': 'ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 systemd[1]: run-netns-ovnmeta\x2dc2dab3f7\x2d3551\x2d4121\x2db4ad\x2de3c2a2b264e7.mount: Deactivated successfully.
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.334 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2dab3f7-3551-4121-b4ad-e3c2a2b264e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:37:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:10.335 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[a0912552-b56d-4125-b716-bbfd2e7a98c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.381351253 +0000 UTC m=+0.377684221 container init b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.391948957 +0000 UTC m=+0.388281905 container start b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.395324433 +0000 UTC m=+0.391657391 container attach b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:37:10 compute-0 systemd[1]: libpod-b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a.scope: Deactivated successfully.
Feb 02 11:37:10 compute-0 keen_meitner[261359]: 167 167
Feb 02 11:37:10 compute-0 conmon[261359]: conmon b56ef39d3c3467e7c372 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a.scope/container/memory.events
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.398902556 +0000 UTC m=+0.395235504 container died b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b844a0874d1b82255a6460d505942885e15520ca2ee5c0cd9bee0c50d26de723-merged.mount: Deactivated successfully.
Feb 02 11:37:10 compute-0 podman[261283]: 2026-02-02 11:37:10.448013694 +0000 UTC m=+0.444346642 container remove b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:37:10 compute-0 systemd[1]: libpod-conmon-b56ef39d3c3467e7c372ebda90c442c728b8823e811f22f9182c1709659f8d9a.scope: Deactivated successfully.
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.499 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:10 compute-0 podman[261385]: 2026-02-02 11:37:10.608033832 +0000 UTC m=+0.041591363 container create bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:37:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 71 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 24 op/s
Feb 02 11:37:10 compute-0 systemd[1]: Started libpod-conmon-bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440.scope.
Feb 02 11:37:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bfd5ab1abe205a9a79fd9fc64fd4763aff2ed97de309fd8d44f681d8ea4e0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bfd5ab1abe205a9a79fd9fc64fd4763aff2ed97de309fd8d44f681d8ea4e0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bfd5ab1abe205a9a79fd9fc64fd4763aff2ed97de309fd8d44f681d8ea4e0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bfd5ab1abe205a9a79fd9fc64fd4763aff2ed97de309fd8d44f681d8ea4e0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:37:10 compute-0 podman[261385]: 2026-02-02 11:37:10.588664337 +0000 UTC m=+0.022221888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:37:10 compute-0 podman[261385]: 2026-02-02 11:37:10.703476139 +0000 UTC m=+0.137033690 container init bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:37:10 compute-0 podman[261385]: 2026-02-02 11:37:10.712063545 +0000 UTC m=+0.145621076 container start bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:37:10 compute-0 podman[261385]: 2026-02-02 11:37:10.717324736 +0000 UTC m=+0.150882267 container attach bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.752 251294 INFO nova.virt.libvirt.driver [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Deleting instance files /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914_del
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.753 251294 INFO nova.virt.libvirt.driver [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Deletion of /var/lib/nova/instances/1cd1ff52-5053-47d8-96b1-171866a19914_del complete
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.868 251294 INFO nova.compute.manager [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Took 1.07 seconds to destroy the instance on the hypervisor.
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.869 251294 DEBUG oslo.service.loopingcall [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.869 251294 DEBUG nova.compute.manager [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 11:37:10 compute-0 nova_compute[251290]: 2026-02-02 11:37:10.870 251294 DEBUG nova.network.neutron [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 11:37:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:11.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
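The paired anonymous "HEAD / HTTP/1.0" probes above, arriving from 192.168.122.100 and 192.168.122.102 roughly every two seconds, are load-balancer-style health checks against radosgw; they recur for the rest of this capture. The beast access line is regular enough to machine-parse. A minimal sketch in Python, with the regex fitted to the format shown here (an assumption, not an official schema):

    import re

    # One beast access-log line, copied from this journal.
    line = ('beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous '
            '[02/Feb/2026:11:37:11.166 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000029s')

    # Extract client, request line, status, body bytes, and latency (seconds).
    m = re.search(r'beast: \S+: (\S+) .*'
                  r'"(\S+) (\S+) [^"]*" (\d+) (\d+).*latency=([\d.]+)s', line)
    if m:
        client, method, path, status, size, latency = m.groups()
        print(client, method, path, status, float(latency))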
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.328 251294 DEBUG nova.compute.manager [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-unplugged-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.329 251294 DEBUG oslo_concurrency.lockutils [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.330 251294 DEBUG oslo_concurrency.lockutils [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.331 251294 DEBUG oslo_concurrency.lockutils [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.331 251294 DEBUG nova.compute.manager [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-unplugged-281a7e60-30d1-4ce3-825e-626d8446b90a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:11 compute-0 nova_compute[251290]: 2026-02-02 11:37:11.332 251294 DEBUG nova.compute.manager [req-9e8e6ba9-1fa0-4424-b2d8-388772fb1659 req-3d31d557-1c80-4643-9b2e-434d8b1b66ab 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-unplugged-281a7e60-30d1-4ce3-825e-626d8446b90a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
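The Acquiring / acquired / released triplet around the "...-events" lock above is the standard oslo.concurrency pattern (note the lockutils.py paths on each line); Nova uses it to serialize external instance events per instance. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is illustrative:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "1cd1ff52-5053-47d8-96b1-171866a19914"  # from the log above

    # The decorator's inner wrapper is what emits the DEBUG
    # "Acquiring lock ... / acquired ... waited / released ... held" lines.
    @lockutils.synchronized(INSTANCE_UUID + "-events")
    def _pop_event():
        # ... look up and remove the waiting network-vif event ...
        pass

    _pop_event()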
Feb 02 11:37:11 compute-0 lvm[261475]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:37:11 compute-0 lvm[261475]: VG ceph_vg0 finished
Feb 02 11:37:11 compute-0 vibrant_haibt[261401]: {}
Feb 02 11:37:11 compute-0 systemd[1]: libpod-bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440.scope: Deactivated successfully.
Feb 02 11:37:11 compute-0 systemd[1]: libpod-bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440.scope: Consumed 1.158s CPU time.
Feb 02 11:37:11 compute-0 podman[261385]: 2026-02-02 11:37:11.476951125 +0000 UTC m=+0.910508656 container died bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb 02 11:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63bfd5ab1abe205a9a79fd9fc64fd4763aff2ed97de309fd8d44f681d8ea4e0d-merged.mount: Deactivated successfully.
Feb 02 11:37:11 compute-0 podman[261385]: 2026-02-02 11:37:11.526516467 +0000 UTC m=+0.960073998 container remove bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:37:11 compute-0 systemd[1]: libpod-conmon-bf61cd895629b0c0448112e341b6c2505d18f226003ef1da52b89942c5380440.scope: Deactivated successfully.
Feb 02 11:37:11 compute-0 sudo[261182]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:37:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.618422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231618469, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 625, "num_deletes": 257, "total_data_size": 775043, "memory_usage": 786984, "flush_reason": "Manual Compaction"}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231626284, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 768032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23231, "largest_seqno": 23855, "table_properties": {"data_size": 764694, "index_size": 1182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7508, "raw_average_key_size": 18, "raw_value_size": 757952, "raw_average_value_size": 1848, "num_data_blocks": 52, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032191, "oldest_key_time": 1770032191, "file_creation_time": 1770032231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 7924 microseconds, and 2538 cpu microseconds.
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.626347) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 768032 bytes OK
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.626372) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.630049) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.630075) EVENT_LOG_v1 {"time_micros": 1770032231630067, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.630111) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 771659, prev total WAL file size 808188, number of live WAL files 2.
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.630573) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(750KB)], [50(12MB)]
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231630611, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13445037, "oldest_snapshot_seqno": -1}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:11 compute-0 sudo[261492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:37:11 compute-0 sudo[261492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:11 compute-0 sudo[261492]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5413 keys, 13322809 bytes, temperature: kUnknown
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231740810, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13322809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13286617, "index_size": 21554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138377, "raw_average_key_size": 25, "raw_value_size": 13188679, "raw_average_value_size": 2436, "num_data_blocks": 878, "num_entries": 5413, "num_filter_entries": 5413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.741114) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13322809 bytes
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.753075) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.9 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(34.9) write-amplify(17.3) OK, records in: 5941, records dropped: 528 output_compression: NoCompression
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.753149) EVENT_LOG_v1 {"time_micros": 1770032231753129, "job": 26, "event": "compaction_finished", "compaction_time_micros": 110289, "compaction_time_cpu_micros": 27854, "output_level": 6, "num_output_files": 1, "total_output_size": 13322809, "num_input_records": 5941, "num_output_records": 5413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231753578, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032231755329, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.630509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.755435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.755448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.755451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.755453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:37:11 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:37:11.755455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
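The JOB 25/26 numbers above are self-consistent: the flush writes table #52 (768032 bytes) to L0, and the manual compaction merges it with table #50 into a single 13322809-byte L6 file. As a reading aid, the amplification figures RocksDB reports can be reproduced from the logged byte counts (plain arithmetic, values copied from the EVENT_LOG entries):

    # Reproduce "read-write-amplify(34.9) write-amplify(17.3)" for JOB 26
    # from the byte counts logged above.
    input_l0 = 768_032         # table #52, the freshly flushed L0 file
    input_total = 13_445_037   # "input_data_size": L0 input + L6 input
    output_l6 = 13_322_809     # "total_output_size": the new L6 file

    write_amplify = output_l6 / input_l0                        # ~17.3
    read_write_amplify = (input_total + output_l6) / input_l0   # ~34.9
    print(f"{write_amplify:.1f} {read_write_amplify:.1f}")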
Feb 02 11:37:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:12 compute-0 ceph-mon[74676]: pgmap v763: 353 pgs: 353 active+clean; 71 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 24 op/s
Feb 02 11:37:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 28 op/s
Feb 02 11:37:12 compute-0 nova_compute[251290]: 2026-02-02 11:37:12.671 251294 DEBUG nova.network.neutron [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:12 compute-0 nova_compute[251290]: 2026-02-02 11:37:12.702 251294 INFO nova.compute.manager [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Took 1.83 seconds to deallocate network for instance.
Feb 02 11:37:12 compute-0 nova_compute[251290]: 2026-02-02 11:37:12.778 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:12 compute-0 nova_compute[251290]: 2026-02-02 11:37:12.779 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:12 compute-0 nova_compute[251290]: 2026-02-02 11:37:12.852 251294 DEBUG oslo_concurrency.processutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:37:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:13.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:37:13 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474409036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.330 251294 DEBUG oslo_concurrency.processutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
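The resource tracker refreshes Ceph-backed disk capacity by shelling out to the exact command logged above (0.478 s here). A minimal sketch of the same call, assuming the ceph CLI, /etc/ceph/ceph.conf, and a client.openstack keyring are present as they are on this node:

    import json
    import subprocess

    # Same command nova_compute runs above; raises on a non-zero exit.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    # "stats" holds the cluster-wide totals in ceph df's JSON output.
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])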
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.338 251294 DEBUG nova.compute.provider_tree [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.356 251294 DEBUG nova.scheduler.client.report [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.382 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
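The inventory dict nova logs above fixes what placement will actually schedule on this host: usable capacity per resource class is (total - reserved) * allocation_ratio, the standard placement capacity rule. Worked out from the logged values (a sketch; irrelevant keys omitted):

    # Usable capacity per resource class, from the inventory logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2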
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.409 251294 DEBUG nova.compute.manager [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.409 251294 DEBUG oslo_concurrency.lockutils [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.410 251294 DEBUG oslo_concurrency.lockutils [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.410 251294 DEBUG oslo_concurrency.lockutils [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.410 251294 DEBUG nova.compute.manager [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] No waiting events found dispatching network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.410 251294 WARNING nova.compute.manager [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received unexpected event network-vif-plugged-281a7e60-30d1-4ce3-825e-626d8446b90a for instance with vm_state deleted and task_state None.
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.410 251294 DEBUG nova.compute.manager [req-e59493ad-f101-44e1-a2e6-88bd40c7bf49 req-a58557f4-0981-4dda-8cde-48d52a6ebe21 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Received event network-vif-deleted-281a7e60-30d1-4ce3-825e-626d8446b90a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.413 251294 INFO nova.scheduler.client.report [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Deleted allocations for instance 1cd1ff52-5053-47d8-96b1-171866a19914
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.470 251294 DEBUG nova.network.neutron [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updated VIF entry in instance network info cache for port 281a7e60-30d1-4ce3-825e-626d8446b90a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.471 251294 DEBUG nova.network.neutron [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Updating instance_info_cache with network_info: [{"id": "281a7e60-30d1-4ce3-825e-626d8446b90a", "address": "fa:16:3e:25:9b:d9", "network": {"id": "c2dab3f7-3551-4121-b4ad-e3c2a2b264e7", "bridge": "br-int", "label": "tempest-network-smoke--1424136142", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281a7e60-30", "ovs_interfaceid": "281a7e60-30d1-4ce3-825e-626d8446b90a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.499 251294 DEBUG oslo_concurrency.lockutils [None req-8c516c77-5a9f-4bfd-b94a-470ec120c6c2 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "1cd1ff52-5053-47d8-96b1-171866a19914" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:13 compute-0 nova_compute[251290]: 2026-02-02 11:37:13.500 251294 DEBUG oslo_concurrency.lockutils [req-34b078c9-2844-42d7-9d89-66e80e47f8ee req-aecc9f37-be95-4aa0-b3c0-3546c7c7c6b8 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-1cd1ff52-5053-47d8-96b1-171866a19914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:37:14 compute-0 ceph-mon[74676]: pgmap v764: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 28 op/s
Feb 02 11:37:14 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/474409036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb 02 11:37:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:37:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 28 op/s
Feb 02 11:37:14 compute-0 sshd-session[261540]: Invalid user latitude from 80.94.92.186 port 59326
Feb 02 11:37:15 compute-0 sshd-session[261540]: Connection closed by invalid user latitude 80.94.92.186 port 59326 [preauth]
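The two sshd-session lines above are unrelated background noise: an Internet SSH probe for the nonexistent user "latitude" from 80.94.92.186, closed at preauth. Probes like this are easy to pull out of a saved journal; a small sketch (the file name is illustrative):

    import re

    # Matches the "Invalid user <name> from <addr> port <port>" form above.
    probe = re.compile(r"sshd[^:]*: Invalid user (\S+) from (\S+) port (\d+)")

    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = probe.search(line)
            if m:
                user, src, port = m.groups()
                print(user, src, port)   # latitude 80.94.92.186 59326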
Feb 02 11:37:15 compute-0 nova_compute[251290]: 2026-02-02 11:37:15.072 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:15 compute-0 nova_compute[251290]: 2026-02-02 11:37:15.501 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:37:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:15 compute-0 ceph-mon[74676]: pgmap v765: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 28 op/s
Feb 02 11:37:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 8.1 KiB/s wr, 34 op/s
Feb 02 11:37:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:16] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb 02 11:37:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:16] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb 02 11:37:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:17.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:37:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:17.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:37:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:17.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:17.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:17 compute-0 ceph-mon[74676]: pgmap v766: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 8.1 KiB/s wr, 34 op/s
Feb 02 11:37:18 compute-0 nova_compute[251290]: 2026-02-02 11:37:18.469 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:18 compute-0 nova_compute[251290]: 2026-02-02 11:37:18.500 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Feb 02 11:37:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:19.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:19.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:19 compute-0 ceph-mon[74676]: pgmap v767: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Feb 02 11:37:20 compute-0 nova_compute[251290]: 2026-02-02 11:37:20.076 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:20 compute-0 nova_compute[251290]: 2026-02-02 11:37:20.503 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Feb 02 11:37:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:21.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:21.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:22 compute-0 ceph-mon[74676]: pgmap v768: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Feb 02 11:37:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 767 B/s wr, 10 op/s
Feb 02 11:37:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:22.672 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:22.672 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:22.672 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:23.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:23.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:24 compute-0 ceph-mon[74676]: pgmap v769: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 767 B/s wr, 10 op/s
Feb 02 11:37:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 681 B/s wr, 7 op/s
Feb 02 11:37:25 compute-0 nova_compute[251290]: 2026-02-02 11:37:25.031 251294 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770032230.0309553, 1cd1ff52-5053-47d8-96b1-171866a19914 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:37:25 compute-0 nova_compute[251290]: 2026-02-02 11:37:25.032 251294 INFO nova.compute.manager [-] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] VM Stopped (Lifecycle Event)
Feb 02 11:37:25 compute-0 nova_compute[251290]: 2026-02-02 11:37:25.078 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:25 compute-0 nova_compute[251290]: 2026-02-02 11:37:25.086 251294 DEBUG nova.compute.manager [None req-4fcb896b-e252-4275-858e-142cb76a21a8 - - - - - -] [instance: 1cd1ff52-5053-47d8-96b1-171866a19914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:37:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:25.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:25.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:25 compute-0 nova_compute[251290]: 2026-02-02 11:37:25.506 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:25 compute-0 sudo[261556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:37:25 compute-0 sudo[261556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:25 compute-0 sudo[261556]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:26 compute-0 ceph-mon[74676]: pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 681 B/s wr, 7 op/s
Feb 02 11:37:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 681 B/s wr, 7 op/s
Feb 02 11:37:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:26] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb 02 11:37:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:26] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb 02 11:37:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:27.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:37:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:28 compute-0 ceph-mon[74676]: pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 681 B/s wr, 7 op/s
Feb 02 11:37:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/216990153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:28 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2018223626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:29.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:29.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:37:29
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', '.nfs']
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:37:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:37:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:37:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:37:30 compute-0 nova_compute[251290]: 2026-02-02 11:37:30.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:37:30 compute-0 nova_compute[251290]: 2026-02-02 11:37:30.081 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:30 compute-0 ceph-mon[74676]: pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:30 compute-0 nova_compute[251290]: 2026-02-02 11:37:30.508 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:31 compute-0 nova_compute[251290]: 2026-02-02 11:37:31.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:31 compute-0 nova_compute[251290]: 2026-02-02 11:37:31.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:31 compute-0 nova_compute[251290]: 2026-02-02 11:37:31.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:31.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:32 compute-0 nova_compute[251290]: 2026-02-02 11:37:32.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:32 compute-0 nova_compute[251290]: 2026-02-02 11:37:32.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:37:32 compute-0 ceph-mon[74676]: pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3856353756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.040 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.040 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.040 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.041 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.041 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:37:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/6596824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:37:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102019683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.559 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.759 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.761 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.762 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.762 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.833 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.834 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:37:33 compute-0 nova_compute[251290]: 2026-02-02 11:37:33.852 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:37:34 compute-0 ceph-mon[74676]: pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:37:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1102019683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:37:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726634000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:34 compute-0 nova_compute[251290]: 2026-02-02 11:37:34.334 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:37:34 compute-0 nova_compute[251290]: 2026-02-02 11:37:34.339 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:37:34 compute-0 nova_compute[251290]: 2026-02-02 11:37:34.361 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:37:34 compute-0 nova_compute[251290]: 2026-02-02 11:37:34.388 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:37:34 compute-0 nova_compute[251290]: 2026-02-02 11:37:34.389 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:37:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:35 compute-0 nova_compute[251290]: 2026-02-02 11:37:35.085 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/726634000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:35 compute-0 nova_compute[251290]: 2026-02-02 11:37:35.389 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:35 compute-0 nova_compute[251290]: 2026-02-02 11:37:35.415 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:35 compute-0 nova_compute[251290]: 2026-02-02 11:37:35.511 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:36 compute-0 ceph-mon[74676]: pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:37:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:36] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Feb 02 11:37:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:36] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Feb 02 11:37:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:37.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:37:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:37.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:37:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:37.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:38 compute-0 nova_compute[251290]: 2026-02-02 11:37:38.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:37:38 compute-0 nova_compute[251290]: 2026-02-02 11:37:38.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:37:38 compute-0 nova_compute[251290]: 2026-02-02 11:37:38.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:37:38 compute-0 nova_compute[251290]: 2026-02-02 11:37:38.047 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:37:38 compute-0 ceph-mon[74676]: pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:37:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2617719132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:37:40 compute-0 nova_compute[251290]: 2026-02-02 11:37:40.089 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:40 compute-0 podman[261640]: 2026-02-02 11:37:40.275620184 +0000 UTC m=+0.059070324 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:37:40 compute-0 ceph-mon[74676]: pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:37:40 compute-0 podman[261641]: 2026-02-02 11:37:40.310583666 +0000 UTC m=+0.094357885 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 02 11:37:40 compute-0 nova_compute[251290]: 2026-02-02 11:37:40.513 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Feb 02 11:37:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:40.760 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:37:40 compute-0 nova_compute[251290]: 2026-02-02 11:37:40.761 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:40.761 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:37:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:41.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:42 compute-0 ceph-mon[74676]: pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Feb 02 11:37:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 8 op/s
Feb 02 11:37:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:44 compute-0 ceph-mon[74676]: pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 8 op/s
Feb 02 11:37:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2204404768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:37:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2204404768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:37:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:37:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Feb 02 11:37:45 compute-0 nova_compute[251290]: 2026-02-02 11:37:45.091 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:45.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2559098543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:37:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/266906903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:37:45 compute-0 nova_compute[251290]: 2026-02-02 11:37:45.515 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:45 compute-0 sudo[261690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:37:45 compute-0 sudo[261690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:37:45 compute-0 sudo[261690]: pam_unix(sudo:session): session closed for user root
Feb 02 11:37:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:46 compute-0 ceph-mon[74676]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Feb 02 11:37:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 02 11:37:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:46] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Feb 02 11:37:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:46] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Feb 02 11:37:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:47.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:37:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:47.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:47.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:48 compute-0 ceph-mon[74676]: pgmap v781: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 02 11:37:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:37:48 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:37:48.763 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:37:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:49.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:50 compute-0 nova_compute[251290]: 2026-02-02 11:37:50.095 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:50 compute-0 ceph-mon[74676]: pgmap v782: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 11:37:50 compute-0 nova_compute[251290]: 2026-02-02 11:37:50.518 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 02 11:37:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:37:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:37:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:52 compute-0 ceph-mon[74676]: pgmap v783: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 02 11:37:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:37:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:53.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:54 compute-0 ceph-mon[74676]: pgmap v784: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:37:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 02 11:37:55 compute-0 nova_compute[251290]: 2026-02-02 11:37:55.097 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:55 compute-0 nova_compute[251290]: 2026-02-02 11:37:55.520 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:37:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:37:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:37:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:37:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:37:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:37:56 compute-0 ceph-mon[74676]: pgmap v785: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 02 11:37:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:37:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:37:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:56] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Feb 02 11:37:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:37:56] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Feb 02 11:37:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:57.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:37:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:57.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:37:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:37:57.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:37:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:37:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:37:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:58 compute-0 ovn_controller[154901]: 2026-02-02T11:37:58Z|00057|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb 02 11:37:58 compute-0 ceph-mon[74676]: pgmap v786: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:37:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:37:58 compute-0 ceph-mgr[74969]: [dashboard INFO request] [192.168.122.100:37350] [POST] [200] [0.002s] [4.0B] [2647548a-378c-45d2-9194-1efbed1a1262] /api/prometheus_receiver
Feb 02 11:37:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:37:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:37:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:37:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:37:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:37:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:37:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:37:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:00 compute-0 nova_compute[251290]: 2026-02-02 11:38:00.099 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:00 compute-0 nova_compute[251290]: 2026-02-02 11:38:00.523 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:00 compute-0 ceph-mon[74676]: pgmap v787: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:38:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 104 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 940 KiB/s wr, 115 op/s
Feb 02 11:38:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:01 compute-0 ceph-mon[74676]: pgmap v788: 353 pgs: 353 active+clean; 104 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 940 KiB/s wr, 115 op/s
Feb 02 11:38:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Feb 02 11:38:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:03.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:03 compute-0 ceph-mon[74676]: pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Feb 02 11:38:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:05 compute-0 nova_compute[251290]: 2026-02-02 11:38:05.101 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:05 compute-0 nova_compute[251290]: 2026-02-02 11:38:05.524 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:05 compute-0 ceph-mon[74676]: pgmap v790: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:05 compute-0 sudo[261735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:38:05 compute-0 sudo[261735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:05 compute-0 sudo[261735]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:07.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:07 compute-0 ceph-mon[74676]: pgmap v791: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:08.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:09.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:09 compute-0 ceph-mon[74676]: pgmap v792: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:10 compute-0 nova_compute[251290]: 2026-02-02 11:38:10.103 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:10 compute-0 nova_compute[251290]: 2026-02-02 11:38:10.526 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:11.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:11.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:11 compute-0 podman[261766]: 2026-02-02 11:38:11.295855886 +0000 UTC m=+0.085440579 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true)
Feb 02 11:38:11 compute-0 podman[261767]: 2026-02-02 11:38:11.302910808 +0000 UTC m=+0.089240428 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:38:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:11 compute-0 ceph-mon[74676]: pgmap v793: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:38:11 compute-0 sudo[261810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:38:11 compute-0 sudo[261810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:11 compute-0 sudo[261810]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:11 compute-0 sudo[261835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:38:11 compute-0 sudo[261835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:12 compute-0 sudo[261835]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:38:12 compute-0 sudo[261891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:38:12 compute-0 sudo[261891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:12 compute-0 sudo[261891]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:12 compute-0 sudo[261916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:38:12 compute-0 sudo[261916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:38:12 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.051299391 +0000 UTC m=+0.042511579 container create 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:38:13 compute-0 systemd[1]: Started libpod-conmon-3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec.scope.
Feb 02 11:38:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.033952504 +0000 UTC m=+0.025164712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.138887441 +0000 UTC m=+0.130099649 container init 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.146856849 +0000 UTC m=+0.138069057 container start 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.150213165 +0000 UTC m=+0.141425353 container attach 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:38:13 compute-0 competent_lamport[261999]: 167 167
Feb 02 11:38:13 compute-0 systemd[1]: libpod-3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec.scope: Deactivated successfully.
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.152515041 +0000 UTC m=+0.143727239 container died 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe9f1161676d5c2daa6d0250bb51dccf55ce98af8fbc284648cf410a0beb6e7-merged.mount: Deactivated successfully.
Feb 02 11:38:13 compute-0 podman[261983]: 2026-02-02 11:38:13.193905897 +0000 UTC m=+0.185118105 container remove 3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:38:13 compute-0 systemd[1]: libpod-conmon-3277abbd9138fe91b57a443d998f2bd3d97ce62cbeb035ec98565ef5aab87aec.scope: Deactivated successfully.
Feb 02 11:38:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:13.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:13.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.32418453 +0000 UTC m=+0.049506400 container create a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:38:13 compute-0 systemd[1]: Started libpod-conmon-a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8.scope.
Feb 02 11:38:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.299538003 +0000 UTC m=+0.024859893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.407071705 +0000 UTC m=+0.132393615 container init a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.414734354 +0000 UTC m=+0.140056224 container start a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.418829441 +0000 UTC m=+0.144151341 container attach a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:38:13 compute-0 affectionate_dirac[262040]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:38:13 compute-0 affectionate_dirac[262040]: --> All data devices are unavailable
Feb 02 11:38:13 compute-0 systemd[1]: libpod-a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8.scope: Deactivated successfully.
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.74895016 +0000 UTC m=+0.474272050 container died a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-650cb4c1df9d3a2689332483e67ef01e224033ba1ab8b3b80195392fcd392691-merged.mount: Deactivated successfully.
Feb 02 11:38:13 compute-0 podman[262023]: 2026-02-02 11:38:13.798259612 +0000 UTC m=+0.523581482 container remove a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_dirac, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:38:13 compute-0 systemd[1]: libpod-conmon-a858f570d21360c284f6d70bbea7f1aa6cb9bd18cc3f8c298df5eeafbe1b4cb8.scope: Deactivated successfully.
Feb 02 11:38:13 compute-0 ceph-mon[74676]: pgmap v794: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Feb 02 11:38:13 compute-0 sudo[261916]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:13 compute-0 sudo[262068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:38:13 compute-0 sudo[262068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:13 compute-0 sudo[262068]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:13 compute-0 sudo[262093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:38:13 compute-0 sudo[262093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.343256787 +0000 UTC m=+0.038620148 container create a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:38:14 compute-0 systemd[1]: Started libpod-conmon-a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e.scope.
Feb 02 11:38:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.327777343 +0000 UTC m=+0.023140714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.436470487 +0000 UTC m=+0.131833868 container init a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.443074527 +0000 UTC m=+0.138437898 container start a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.446759602 +0000 UTC m=+0.142122973 container attach a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:38:14 compute-0 silly_moore[262176]: 167 167
Feb 02 11:38:14 compute-0 systemd[1]: libpod-a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e.scope: Deactivated successfully.
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.450428797 +0000 UTC m=+0.145792158 container died a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2367a7ff8bd5b01ab5a50505e1191beb5a050fb10ca9b7e605ce626c617a481-merged.mount: Deactivated successfully.
Feb 02 11:38:14 compute-0 podman[262159]: 2026-02-02 11:38:14.48819477 +0000 UTC m=+0.183558131 container remove a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:38:14 compute-0 systemd[1]: libpod-conmon-a783e1c542cdf374594e468ee153ef32dfe082e71c7476f866d59bb95797979e.scope: Deactivated successfully.
Feb 02 11:38:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 11 KiB/s wr, 1 op/s
Feb 02 11:38:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:38:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.615364993 +0000 UTC m=+0.040416839 container create e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:38:14 compute-0 systemd[1]: Started libpod-conmon-e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718.scope.
Feb 02 11:38:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15746ebf1e216e68aa2191706863fa3fbc76cc23dea9c7d0467b164a0597d43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15746ebf1e216e68aa2191706863fa3fbc76cc23dea9c7d0467b164a0597d43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15746ebf1e216e68aa2191706863fa3fbc76cc23dea9c7d0467b164a0597d43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15746ebf1e216e68aa2191706863fa3fbc76cc23dea9c7d0467b164a0597d43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.597060299 +0000 UTC m=+0.022112165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.69761234 +0000 UTC m=+0.122664216 container init e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.704073265 +0000 UTC m=+0.129125111 container start e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.712014073 +0000 UTC m=+0.137065909 container attach e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:38:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:14 compute-0 cool_mclaren[262216]: {
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:     "1": [
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:         {
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "devices": [
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "/dev/loop3"
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             ],
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "lv_name": "ceph_lv0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "lv_size": "21470642176",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "name": "ceph_lv0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "tags": {
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.cluster_name": "ceph",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.crush_device_class": "",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.encrypted": "0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.osd_id": "1",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.type": "block",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.vdo": "0",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:                 "ceph.with_tpm": "0"
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             },
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "type": "block",
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:             "vg_name": "ceph_vg0"
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:         }
Feb 02 11:38:14 compute-0 cool_mclaren[262216]:     ]
Feb 02 11:38:14 compute-0 cool_mclaren[262216]: }
Feb 02 11:38:14 compute-0 systemd[1]: libpod-e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718.scope: Deactivated successfully.
Feb 02 11:38:14 compute-0 podman[262200]: 2026-02-02 11:38:14.968890262 +0000 UTC m=+0.393942108 container died e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d15746ebf1e216e68aa2191706863fa3fbc76cc23dea9c7d0467b164a0597d43-merged.mount: Deactivated successfully.
Feb 02 11:38:15 compute-0 podman[262200]: 2026-02-02 11:38:15.008032444 +0000 UTC m=+0.433084290 container remove e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:38:15 compute-0 systemd[1]: libpod-conmon-e78a3e8ed66f04b2c369a5c40a5337ae6f8baefd2a2ee50094f03cd6906c8718.scope: Deactivated successfully.
Feb 02 11:38:15 compute-0 sudo[262093]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:15 compute-0 nova_compute[251290]: 2026-02-02 11:38:15.104 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:15 compute-0 sudo[262239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:38:15 compute-0 sudo[262239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:15 compute-0 sudo[262239]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:15 compute-0 sudo[262264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:38:15 compute-0 sudo[262264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:15.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:15 compute-0 nova_compute[251290]: 2026-02-02 11:38:15.527 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.535260398 +0000 UTC m=+0.041872029 container create da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:38:15 compute-0 systemd[1]: Started libpod-conmon-da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba.scope.
Feb 02 11:38:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.516139061 +0000 UTC m=+0.022750712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.627238944 +0000 UTC m=+0.133850595 container init da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.632777572 +0000 UTC m=+0.139389203 container start da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.637097406 +0000 UTC m=+0.143709037 container attach da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:38:15 compute-0 gifted_keldysh[262345]: 167 167
Feb 02 11:38:15 compute-0 systemd[1]: libpod-da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba.scope: Deactivated successfully.
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.639567877 +0000 UTC m=+0.146179508 container died da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b49f1588cf2099ecf980e8b07441a7ce3e24ef14b58d77bfd572d4155b2a5c5-merged.mount: Deactivated successfully.
Feb 02 11:38:15 compute-0 podman[262329]: 2026-02-02 11:38:15.674034575 +0000 UTC m=+0.180646216 container remove da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_keldysh, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:38:15 compute-0 systemd[1]: libpod-conmon-da2a1785b76e1bdeee9bf5377edeae9c79806c5be15761cbb39539f71aba32ba.scope: Deactivated successfully.
Feb 02 11:38:15 compute-0 podman[262369]: 2026-02-02 11:38:15.803918966 +0000 UTC m=+0.041072118 container create 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Feb 02 11:38:15 compute-0 systemd[1]: Started libpod-conmon-1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b.scope.
Feb 02 11:38:15 compute-0 ceph-mon[74676]: pgmap v795: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 11 KiB/s wr, 1 op/s
Feb 02 11:38:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d25783d2f7bd54276d6645815c89574df9be10f41302379fd9732db538a6ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d25783d2f7bd54276d6645815c89574df9be10f41302379fd9732db538a6ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d25783d2f7bd54276d6645815c89574df9be10f41302379fd9732db538a6ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d25783d2f7bd54276d6645815c89574df9be10f41302379fd9732db538a6ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:38:15 compute-0 podman[262369]: 2026-02-02 11:38:15.786703923 +0000 UTC m=+0.023857095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:38:15 compute-0 podman[262369]: 2026-02-02 11:38:15.894085559 +0000 UTC m=+0.131238711 container init 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:38:15 compute-0 podman[262369]: 2026-02-02 11:38:15.901051699 +0000 UTC m=+0.138204841 container start 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:38:15 compute-0 podman[262369]: 2026-02-02 11:38:15.904455276 +0000 UTC m=+0.141608418 container attach 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb 02 11:38:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:16 compute-0 lvm[262460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:38:16 compute-0 lvm[262460]: VG ceph_vg0 finished
Feb 02 11:38:16 compute-0 dreamy_burnell[262386]: {}
Feb 02 11:38:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:16 compute-0 systemd[1]: libpod-1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b.scope: Deactivated successfully.
Feb 02 11:38:16 compute-0 systemd[1]: libpod-1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b.scope: Consumed 1.124s CPU time.
Feb 02 11:38:16 compute-0 podman[262369]: 2026-02-02 11:38:16.649161353 +0000 UTC m=+0.886314495 container died 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d25783d2f7bd54276d6645815c89574df9be10f41302379fd9732db538a6ef-merged.mount: Deactivated successfully.
Feb 02 11:38:16 compute-0 podman[262369]: 2026-02-02 11:38:16.726243012 +0000 UTC m=+0.963396154 container remove 1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:38:16 compute-0 systemd[1]: libpod-conmon-1685f39ca27918ecf08aa4cfcbda0be87ed40ad7e8bd5504ea0239ac09f0586b.scope: Deactivated successfully.
Feb 02 11:38:16 compute-0 sudo[262264]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:38:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:38:16 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:16 compute-0 sudo[262474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:38:16 compute-0 sudo[262474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:16 compute-0 sudo[262474]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:16] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:16] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:17.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:17 compute-0 ceph-mon[74676]: pgmap v796: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:17 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:38:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:18.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:38:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000057s ======
Feb 02 11:38:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Feb 02 11:38:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:19 compute-0 ceph-mon[74676]: pgmap v797: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/943091793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:20 compute-0 nova_compute[251290]: 2026-02-02 11:38:20.134 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:20 compute-0 nova_compute[251290]: 2026-02-02 11:38:20.531 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:21.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:21 compute-0 ceph-mon[74676]: pgmap v798: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:38:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 02 11:38:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:38:22.673 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:38:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:38:22.674 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:38:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:38:22.674 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.891478) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032302891523, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 877, "num_deletes": 251, "total_data_size": 1441282, "memory_usage": 1471048, "flush_reason": "Manual Compaction"}
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032302909217, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1405306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23856, "largest_seqno": 24732, "table_properties": {"data_size": 1400937, "index_size": 2021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9934, "raw_average_key_size": 19, "raw_value_size": 1392050, "raw_average_value_size": 2773, "num_data_blocks": 90, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032231, "oldest_key_time": 1770032231, "file_creation_time": 1770032302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17812 microseconds, and 3935 cpu microseconds.
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.909285) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1405306 bytes OK
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.909313) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.911003) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.911087) EVENT_LOG_v1 {"time_micros": 1770032302911066, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.911129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1437016, prev total WAL file size 1437016, number of live WAL files 2.
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.911889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1372KB)], [53(12MB)]
Feb 02 11:38:22 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032302911960, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14728115, "oldest_snapshot_seqno": -1}
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5398 keys, 12571808 bytes, temperature: kUnknown
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032303032098, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12571808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12536413, "index_size": 20821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138747, "raw_average_key_size": 25, "raw_value_size": 12439314, "raw_average_value_size": 2304, "num_data_blocks": 844, "num_entries": 5398, "num_filter_entries": 5398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.032394) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12571808 bytes
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.033507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.5 rd, 104.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(19.4) write-amplify(8.9) OK, records in: 5915, records dropped: 517 output_compression: NoCompression
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.033527) EVENT_LOG_v1 {"time_micros": 1770032303033517, "job": 28, "event": "compaction_finished", "compaction_time_micros": 120224, "compaction_time_cpu_micros": 22830, "output_level": 6, "num_output_files": 1, "total_output_size": 12571808, "num_input_records": 5915, "num_output_records": 5398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032303033789, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032303036282, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:22.911772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.036347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.036354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.036356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.036357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:38:23.036359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:38:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:23 compute-0 ceph-mon[74676]: pgmap v799: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 02 11:38:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:25 compute-0 nova_compute[251290]: 2026-02-02 11:38:25.136 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:25.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:25 compute-0 nova_compute[251290]: 2026-02-02 11:38:25.533 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:25 compute-0 ceph-mon[74676]: pgmap v800: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:25 compute-0 sudo[262509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:38:25 compute-0 sudo[262509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:25 compute-0 sudo[262509]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:26 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2414332668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:38:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:26] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:26] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:38:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:27.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:27.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:27 compute-0 ceph-mon[74676]: pgmap v801: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:27 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3990723522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:38:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:28.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:38:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:38:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:29.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:38:29
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.meta', 'backups', 'volumes', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.control']
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:38:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:38:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105905999706974 of space, bias 1.0, pg target 0.3317717999120922 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:38:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:38:29 compute-0 ceph-mon[74676]: pgmap v802: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:29 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/310368399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:38:30 compute-0 nova_compute[251290]: 2026-02-02 11:38:30.138 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:30 compute-0 nova_compute[251290]: 2026-02-02 11:38:30.535 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:30 compute-0 ceph-mon[74676]: pgmap v803: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:38:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1722871235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:31 compute-0 nova_compute[251290]: 2026-02-02 11:38:31.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:31.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3846624158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:32 compute-0 nova_compute[251290]: 2026-02-02 11:38:32.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:32 compute-0 nova_compute[251290]: 2026-02-02 11:38:32.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 02 11:38:32 compute-0 ceph-mon[74676]: pgmap v804: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 02 11:38:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/442912798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.046 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.046 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.047 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.047 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.047 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:38:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:33.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:38:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355551418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.495 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.643 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.644 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=59.921791076660156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.644 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.645 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.709 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.710 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:38:33 compute-0 nova_compute[251290]: 2026-02-02 11:38:33.738 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:38:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1355551418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3183620249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:38:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628903979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:34 compute-0 nova_compute[251290]: 2026-02-02 11:38:34.209 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:38:34 compute-0 nova_compute[251290]: 2026-02-02 11:38:34.217 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:38:34 compute-0 nova_compute[251290]: 2026-02-02 11:38:34.268 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:38:34 compute-0 nova_compute[251290]: 2026-02-02 11:38:34.270 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:38:34 compute-0 nova_compute[251290]: 2026-02-02 11:38:34.271 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:38:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 15 KiB/s wr, 61 op/s
Feb 02 11:38:34 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 02 11:38:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3628903979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:35 compute-0 ceph-mon[74676]: pgmap v805: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 15 KiB/s wr, 61 op/s
Feb 02 11:38:35 compute-0 nova_compute[251290]: 2026-02-02 11:38:35.140 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:35.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:35 compute-0 nova_compute[251290]: 2026-02-02 11:38:35.537 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:36 compute-0 nova_compute[251290]: 2026-02-02 11:38:36.266 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:36 compute-0 nova_compute[251290]: 2026-02-02 11:38:36.266 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Feb 02 11:38:36 compute-0 ceph-mon[74676]: pgmap v806: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Feb 02 11:38:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb 02 11:38:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb 02 11:38:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:37.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:37.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Feb 02 11:38:38 compute-0 ceph-mon[74676]: pgmap v807: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Feb 02 11:38:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.034 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.142 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Feb 02 11:38:40 compute-0 nova_compute[251290]: 2026-02-02 11:38:40.540 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:40 compute-0 ceph-mon[74676]: pgmap v808: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Feb 02 11:38:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:41.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:41.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:42 compute-0 podman[262595]: 2026-02-02 11:38:42.275616008 +0000 UTC m=+0.058440266 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 11:38:42 compute-0 podman[262596]: 2026-02-02 11:38:42.306909904 +0000 UTC m=+0.088968850 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:38:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 188 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Feb 02 11:38:42 compute-0 ceph-mon[74676]: pgmap v809: 353 pgs: 353 active+clean; 188 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Feb 02 11:38:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:43.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/392651657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:38:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/392651657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:38:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 188 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 608 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Feb 02 11:38:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:38:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:45 compute-0 ceph-mon[74676]: pgmap v810: 353 pgs: 353 active+clean; 188 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 608 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Feb 02 11:38:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:45 compute-0 nova_compute[251290]: 2026-02-02 11:38:45.144 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:45.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:45.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:45 compute-0 nova_compute[251290]: 2026-02-02 11:38:45.541 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:46 compute-0 sudo[262643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:38:46 compute-0 sudo[262643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:38:46 compute-0 sudo[262643]: pam_unix(sudo:session): session closed for user root
Feb 02 11:38:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 795 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 02 11:38:46 compute-0 ceph-mon[74676]: pgmap v811: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 795 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 02 11:38:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:46] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Feb 02 11:38:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:46] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Feb 02 11:38:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:47.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:47.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:38:48 compute-0 ceph-mon[74676]: pgmap v812: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:38:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:48.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:38:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:48.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:38:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:49.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:50 compute-0 nova_compute[251290]: 2026-02-02 11:38:50.146 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:50 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:38:50.306 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:38:50 compute-0 nova_compute[251290]: 2026-02-02 11:38:50.307 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:50 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:38:50.308 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:38:50 compute-0 nova_compute[251290]: 2026-02-02 11:38:50.543 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:38:50 compute-0 ceph-mon[74676]: pgmap v813: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:38:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:51.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:51.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 145 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Feb 02 11:38:52 compute-0 ceph-mon[74676]: pgmap v814: 353 pgs: 353 active+clean; 145 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Feb 02 11:38:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:38:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:38:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4150381266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 145 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 108 KiB/s wr, 48 op/s
Feb 02 11:38:54 compute-0 ceph-mon[74676]: pgmap v815: 353 pgs: 353 active+clean; 145 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 108 KiB/s wr, 48 op/s
Feb 02 11:38:55 compute-0 nova_compute[251290]: 2026-02-02 11:38:55.148 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:55.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:38:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:55.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:38:55 compute-0 nova_compute[251290]: 2026-02-02 11:38:55.545 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:38:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:38:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:38:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:38:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:38:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:38:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 108 KiB/s wr, 65 op/s
Feb 02 11:38:56 compute-0 ceph-mon[74676]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 108 KiB/s wr, 65 op/s
Feb 02 11:38:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:38:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Feb 02 11:38:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:38:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Feb 02 11:38:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:57.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:57.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1036864635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:38:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 32 op/s
Feb 02 11:38:58 compute-0 ceph-mon[74676]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 32 op/s
Feb 02 11:38:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:38:58.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:38:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:38:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:38:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:38:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:38:59.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:38:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:38:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:38:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:38:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:00 compute-0 nova_compute[251290]: 2026-02-02 11:39:00.151 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:00.310 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:00 compute-0 nova_compute[251290]: 2026-02-02 11:39:00.547 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 32 op/s
Feb 02 11:39:00 compute-0 ceph-mon[74676]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 32 op/s
Feb 02 11:39:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
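This four-line block repeats every five seconds: ganesha re-enters a 90-second grace period, finds no clients with state to reclaim (clid count(0)), and rados_cluster_grace_enforcing returns -45. The value is a negated errno; decoding it, as below, gives EL2NSYNC ("Level 2 not synchronized"), which in this context appears to mean the cluster-wide grace database is not yet in sync across the NFS cluster, so the grace period cannot be lifted. The decode is standard Python; the interpretation is an inference from the log, not a statement from the ganesha sources.

    # Decode the negative errno in "rados_cluster_grace_enforcing: ret=-45".
    import errno
    import os

    ret = -45
    print(errno.errorcode[-ret])  # 'EL2NSYNC' on Linux
    print(os.strerror(-ret))      # 'Level 2 not synchronized'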
Feb 02 11:39:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:01.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 57 op/s
Feb 02 11:39:02 compute-0 ceph-mon[74676]: pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 57 op/s
Feb 02 11:39:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:03.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb 02 11:39:04 compute-0 ceph-mon[74676]: pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb 02 11:39:05 compute-0 nova_compute[251290]: 2026-02-02 11:39:05.154 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:05.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:05 compute-0 nova_compute[251290]: 2026-02-02 11:39:05.549 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:06 compute-0 sudo[262689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:39:06 compute-0 sudo[262689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:06 compute-0 sudo[262689]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Feb 02 11:39:06 compute-0 ceph-mon[74676]: pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Feb 02 11:39:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:39:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:39:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:07.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:08 compute-0 ceph-mon[74676]: pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:08.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:39:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:10 compute-0 nova_compute[251290]: 2026-02-02 11:39:10.156 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:10 compute-0 nova_compute[251290]: 2026-02-02 11:39:10.551 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:10 compute-0 ceph-mon[74676]: pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:12 compute-0 ceph-mon[74676]: pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.891 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.891 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.905 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.985 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.986 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.992 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 11:39:12 compute-0 nova_compute[251290]: 2026-02-02 11:39:12.993 251294 INFO nova.compute.claims [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Claim successful on node compute-0.ctlplane.example.com
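The build path serializes on two oslo.concurrency locks before claiming resources: one keyed by the instance UUID and one named "compute_resources" (the Acquiring/acquired/released lines with waited/held timings are lockutils' standard trace from lockutils.py:404/409/423). The primitive behind those lines is sketched here; the lock name is the one in the log and the guarded body is illustrative.

    # The oslo.concurrency pattern behind the lockutils trace lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        # resource-tracker bookkeeping runs while the lock is held;
        # lockutils logs the waited/held durations seen in the journal
        pass

    instance_claim()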
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.101 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:13 compute-0 podman[262722]: 2026-02-02 11:39:13.323816348 +0000 UTC m=+0.111654780 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 11:39:13 compute-0 podman[262723]: 2026-02-02 11:39:13.337538751 +0000 UTC m=+0.122963434 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:39:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:39:13 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691005373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.603 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.610 251294 DEBUG nova.compute.provider_tree [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.632 251294 DEBUG nova.scheduler.client.report [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
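Placement derives usable capacity from that inventory as (total - reserved) x allocation_ratio per resource class, so this node advertises 7167 MB of RAM, 32 schedulable vCPUs (8 physical x 4.0 overcommit), and 52.2 GB of disk. The check below just replays that arithmetic on the values from the log line.

    # Effective capacity implied by the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2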
Feb 02 11:39:13 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3691005373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.657 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.658 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.705 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.705 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.735 251294 INFO nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.758 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.887 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.889 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.889 251294 INFO nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Creating image(s)
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.919 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.948 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.979 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.984 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:13 compute-0 nova_compute[251290]: 2026-02-02 11:39:13.998 251294 DEBUG nova.policy [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.045 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
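nova wraps the qemu-img probe in oslo_concurrency.prlimit (--as=1073741824 --cpu=30) so a malformed base image cannot exhaust memory or CPU while being inspected. Stripped of that wrapper, the probe and its JSON output look like this; the image path is the one from the log.

    # Replay the qemu-img probe from the CMD line above (without the prlimit
    # wrapper the original uses) and parse its JSON output.
    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297"
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])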
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.047 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.047 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.048 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.076 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.080 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.356 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.435 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] resizing rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
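So the root disk is built by importing the cached base image into the vms pool and resizing the resulting RBD image to the flavor's 1 GiB root disk (m1.nano, root_gb=1, as the flavor dump further down confirms). The resize step can equally be expressed with the rbd Python binding; pool, image name, and client identity below are taken from the log lines above.

    # The resize step from rbd_utils.resize, via the rbd binding instead of
    # the CLI the log shows.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    with rbd.Image(ioctx, "ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk") as img:
        img.resize(1073741824)  # 1 GiB, the size logged above
    ioctx.close()
    cluster.shutdown()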
Feb 02 11:39:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.577 251294 DEBUG nova.objects.instance [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'migration_context' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.602 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.603 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Ensure instance console log exists: /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.603 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.604 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:14 compute-0 nova_compute[251290]: 2026-02-02 11:39:14.604 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:39:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:14 compute-0 ceph-mon[74676]: pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:39:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:15 compute-0 nova_compute[251290]: 2026-02-02 11:39:15.158 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:15 compute-0 nova_compute[251290]: 2026-02-02 11:39:15.189 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Successfully created port: e235e7e6-e897-4b5c-80c9-036612ca0aa0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:39:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:15.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:15 compute-0 nova_compute[251290]: 2026-02-02 11:39:15.554 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:16 compute-0 ceph-mon[74676]: pgmap v826: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:16 compute-0 nova_compute[251290]: 2026-02-02 11:39:16.908 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Successfully updated port: e235e7e6-e897-4b5c-80c9-036612ca0aa0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:39:16 compute-0 nova_compute[251290]: 2026-02-02 11:39:16.940 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:16 compute-0 nova_compute[251290]: 2026-02-02 11:39:16.941 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:16 compute-0 nova_compute[251290]: 2026-02-02 11:39:16.941 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:39:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:16] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:39:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:16] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:39:17 compute-0 nova_compute[251290]: 2026-02-02 11:39:17.107 251294 DEBUG nova.compute.manager [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:17 compute-0 nova_compute[251290]: 2026-02-02 11:39:17.107 251294 DEBUG nova.compute.manager [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing instance network info cache due to event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:39:17 compute-0 nova_compute[251290]: 2026-02-02 11:39:17.108 251294 DEBUG oslo_concurrency.lockutils [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:17 compute-0 nova_compute[251290]: 2026-02-02 11:39:17.109 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 11:39:17 compute-0 sudo[262955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:39:17 compute-0 sudo[262955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:17.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:17 compute-0 sudo[262955]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:17 compute-0 sudo[262980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:39:17 compute-0 sudo[262980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:17.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:17 compute-0 podman[263078]: 2026-02-02 11:39:17.730686478 +0000 UTC m=+0.068275347 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:39:17 compute-0 podman[263078]: 2026-02-02 11:39:17.868089315 +0000 UTC m=+0.205678094 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
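The sudo sequence above is cephadm at work: a /bin/which python3 probe, then the copied cephadm binary run as root with "ls" to enumerate deployed daemons; the podman container exec/exec_died pair on the mon container is most likely that inventory pass collecting daemon state. cephadm ls prints a JSON array, which can be consumed as below (run as root, as in the log; the fields printed are the commonly present ones).

    # Parse the JSON daemon inventory that "cephadm ls" (run above) emits.
    import json
    import subprocess

    out = subprocess.run(["cephadm", "ls"], check=True,
                         capture_output=True, text=True).stdout
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))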
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.103 251294 DEBUG nova.network.neutron [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.124 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.125 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance network_info: |[{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.125 251294 DEBUG oslo_concurrency.lockutils [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.126 251294 DEBUG nova.network.neutron [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.129 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Start _get_guest_xml network_info=[{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '8a4b36bd-584f-4a0a-aab3-55c0b12d2d97'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
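Everything _get_guest_xml needs about networking rides in that network_info structure: per-VIF MAC address, the integration bridge, and the fixed IPs nested under network.subnets[].ips[]. The walk below extracts those fields from a copy trimmed to the keys it touches.

    # Extract the fields the guest-XML step consumes from the network_info
    # cache logged above (trimmed to the relevant keys).
    network_info = [{
        "id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0",
        "address": "fa:16:3e:83:bd:9e",
        "network": {"bridge": "br-int",
                    "subnets": [{"ips": [{"address": "10.100.0.4"}]}]},
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["address"], vif["network"]["bridge"], ips)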
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.133 251294 WARNING nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.139 251294 DEBUG nova.virt.libvirt.host [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.140 251294 DEBUG nova.virt.libvirt.host [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.148 251294 DEBUG nova.virt.libvirt.host [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.149 251294 DEBUG nova.virt.libvirt.host [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.150 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.150 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:33:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5413fce8-24ad-46a1-a21e-000a8299c8f6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.151 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.151 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.151 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.151 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.151 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.152 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.152 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.152 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.152 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.153 251294 DEBUG nova.virt.hardware [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.156 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:18 compute-0 podman[263232]: 2026-02-02 11:39:18.399940013 +0000 UTC m=+0.050139218 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:18 compute-0 podman[263232]: 2026-02-02 11:39:18.41626298 +0000 UTC m=+0.066462185 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:18 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:39:18 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900783451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:39:18 compute-0 ceph-mon[74676]: pgmap v827: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.649 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:18 compute-0 podman[263306]: 2026-02-02 11:39:18.681458849 +0000 UTC m=+0.062841652 container exec d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.681 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:18 compute-0 nova_compute[251290]: 2026-02-02 11:39:18.686 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:18 compute-0 podman[263346]: 2026-02-02 11:39:18.746933425 +0000 UTC m=+0.049916762 container exec_died d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb 02 11:39:18 compute-0 podman[263306]: 2026-02-02 11:39:18.752175265 +0000 UTC m=+0.133558038 container exec_died d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:39:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:18.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:18 compute-0 podman[263409]: 2026-02-02 11:39:18.949525009 +0000 UTC m=+0.052494875 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:39:18 compute-0 podman[263409]: 2026-02-02 11:39:18.955734677 +0000 UTC m=+0.058704523 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:39:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:39:19 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789502972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.172 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.174 251294 DEBUG nova.virt.libvirt.vif [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:39:13Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.174 251294 DEBUG nova.network.os_vif_util [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.175 251294 DEBUG nova.network.os_vif_util [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.177 251294 DEBUG nova.objects.instance [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_devices' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.207 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] End _get_guest_xml xml=<domain type="kvm">
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <uuid>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</uuid>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <name>instance-00000006</name>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <memory>131072</memory>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <vcpu>1</vcpu>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:creationTime>2026-02-02 11:39:18</nova:creationTime>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:flavor name="m1.nano">
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:memory>128</nova:memory>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:disk>1</nova:disk>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:swap>0</nova:swap>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:vcpus>1</nova:vcpus>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </nova:flavor>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:owner>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </nova:owner>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <nova:ports>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:39:19 compute-0 nova_compute[251290]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         </nova:port>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </nova:ports>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </nova:instance>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <sysinfo type="smbios">
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <system>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="manufacturer">RDO</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="product">OpenStack Compute</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="serial">ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="uuid">ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <entry name="family">Virtual Machine</entry>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </system>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <os>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <boot dev="hd"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <smbios mode="sysinfo"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </os>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <features>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <vmcoreinfo/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </features>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <clock offset="utc">
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <timer name="hpet" present="no"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <cpu mode="host-model" match="exact">
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <disk type="network" device="disk">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk">
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </source>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <target dev="vda" bus="virtio"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <disk type="network" device="cdrom">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config">
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </source>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:39:19 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <target dev="sda" bus="sata"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <interface type="ethernet">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <mac address="fa:16:3e:83:bd:9e"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <mtu size="1442"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <target dev="tape235e7e6-e8"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <serial type="pty">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <log file="/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log" append="off"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <video>
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </video>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <input type="tablet" bus="usb"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <rng model="virtio">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <backend model="random">/dev/urandom</backend>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <controller type="usb" index="0"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     <memballoon model="virtio">
Feb 02 11:39:19 compute-0 nova_compute[251290]:       <stats period="10"/>
Feb 02 11:39:19 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:39:19 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:39:19 compute-0 nova_compute[251290]: </domain>
Feb 02 11:39:19 compute-0 nova_compute[251290]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.208 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Preparing to wait for external event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.209 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.210 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.210 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.211 251294 DEBUG nova.virt.libvirt.vif [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:39:13Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.211 251294 DEBUG nova.network.os_vif_util [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.212 251294 DEBUG nova.network.os_vif_util [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.212 251294 DEBUG os_vif [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.213 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.214 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.214 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:39:19 compute-0 podman[263478]: 2026-02-02 11:39:19.216838638 +0000 UTC m=+0.116862379 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-type=git)
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.219 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.219 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape235e7e6-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.220 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape235e7e6-e8, col_values=(('external_ids', {'iface-id': 'e235e7e6-e897-4b5c-80c9-036612ca0aa0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:bd:9e', 'vm-uuid': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.222 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:19 compute-0 NetworkManager[49067]: <info>  [1770032359.2236] manager: (tape235e7e6-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.226 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.229 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.230 251294 INFO os_vif [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8')
Feb 02 11:39:19 compute-0 podman[263478]: 2026-02-02 11:39:19.256281888 +0000 UTC m=+0.156305619 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, version=2.2.4, release=1793, com.redhat.component=keepalived-container)
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.301 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.301 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.301 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:83:bd:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.302 251294 INFO nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Using config drive
Feb 02 11:39:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.335 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:19.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:19 compute-0 podman[263566]: 2026-02-02 11:39:19.46577936 +0000 UTC m=+0.054070400 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:19 compute-0 podman[263566]: 2026-02-02 11:39:19.489969573 +0000 UTC m=+0.078260603 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3900783451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:39:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/789502972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:39:19 compute-0 podman[263642]: 2026-02-02 11:39:19.684648551 +0000 UTC m=+0.052652179 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.776 251294 INFO nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Creating config drive at /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.780 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpct1ir72f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.835 251294 DEBUG nova.network.neutron [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated VIF entry in instance network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.836 251294 DEBUG nova.network.neutron [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.882 251294 DEBUG oslo_concurrency.lockutils [req-f865c8ba-0be7-4a0a-8f5d-8639cfe07aec req-d9c0e11e-386f-437d-8ce3-4930d7103255 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:19 compute-0 podman[263642]: 2026-02-02 11:39:19.900391923 +0000 UTC m=+0.268395521 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.907 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpct1ir72f" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.961 251294 DEBUG nova.storage.rbd_utils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:39:19 compute-0 nova_compute[251290]: 2026-02-02 11:39:19.966 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.530 251294 DEBUG oslo_concurrency.processutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.531 251294 INFO nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Deleting local config drive /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/disk.config because it was imported into RBD.
Feb 02 11:39:20 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.556 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:20 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 11:39:20 compute-0 kernel: tape235e7e6-e8: entered promiscuous mode
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.6232] manager: (tape235e7e6-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.624 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ovn_controller[154901]: 2026-02-02T11:39:20Z|00058|binding|INFO|Claiming lport e235e7e6-e897-4b5c-80c9-036612ca0aa0 for this chassis.
Feb 02 11:39:20 compute-0 ovn_controller[154901]: 2026-02-02T11:39:20Z|00059|binding|INFO|e235e7e6-e897-4b5c-80c9-036612ca0aa0: Claiming fa:16:3e:83:bd:9e 10.100.0.4
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.628 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.631 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.639 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:bd:9e 10.100.0.4'], port_security=['fa:16:3e:83:bd:9e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06a83769-0f4f-4012-9307-4d0e81e87120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e5471f9-85cc-4467-a88c-e46226a3955b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77029122-a2df-4506-bb27-dd42ac356ba6, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=e235e7e6-e897-4b5c-80c9-036612ca0aa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.640 165304 INFO neutron.agent.ovn.metadata.agent [-] Port e235e7e6-e897-4b5c-80c9-036612ca0aa0 in datapath 06a83769-0f4f-4012-9307-4d0e81e87120 bound to our chassis
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.642 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06a83769-0f4f-4012-9307-4d0e81e87120
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.653 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a49a0993-6c0b-48bd-8120-afd10e504ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.654 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap06a83769-01 in ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 11:39:20 compute-0 systemd-machined[218018]: New machine qemu-3-instance-00000006.
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.656 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap06a83769-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.656 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef6dc56-787a-41e9-9b22-3a242210ffe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.657 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.658 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[d2560891-5cc0-4928-90da-10b4914ee0f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Feb 02 11:39:20 compute-0 ovn_controller[154901]: 2026-02-02T11:39:20Z|00060|binding|INFO|Setting lport e235e7e6-e897-4b5c-80c9-036612ca0aa0 ovn-installed in OVS
Feb 02 11:39:20 compute-0 ovn_controller[154901]: 2026-02-02T11:39:20Z|00061|binding|INFO|Setting lport e235e7e6-e897-4b5c-80c9-036612ca0aa0 up in Southbound
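At this point ovn-controller has claimed the logical port, marked it ovn-installed in OVS, and set it up in the Southbound DB. A hypothetical spot-check of that state from the chassis, shelling out to ovn-sbctl's generic database commands (the command syntax is standard; the wrapper is illustrative):

    import subprocess

    # Inspect the Port_Binding row for the logical port claimed above; the
    # "up" and "chassis" columns should match what ovn-controller logged.
    out = subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=e235e7e6-e897-4b5c-80c9-036612ca0aa0"],
        text=True)
    print(out)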
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.661 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ceph-mon[74676]: pgmap v828: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.671 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb05ebe-a71c-4f66-a4d4-0d0c662b0f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 systemd-udevd[263843]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.690 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[7d189a5a-5129-4722-9488-8e4900910c72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.7001] device (tape235e7e6-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.7010] device (tape235e7e6-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:39:20 compute-0 podman[263818]: 2026-02-02 11:39:20.706083445 +0000 UTC m=+0.087927870 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.722 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[3e30b765-ae97-43ff-baca-8703f0edf133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.727 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[1126173f-aa1b-4f3c-b435-c95803ce82d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 systemd-udevd[263851]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.7286] manager: (tap06a83769-00): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Feb 02 11:39:20 compute-0 podman[263818]: 2026-02-02 11:39:20.744436914 +0000 UTC m=+0.126281319 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.762 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[53b74cfb-e1c9-4e26-a27a-9eaa7ffb59a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.766 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[f2931748-414a-4fee-977b-3168d9b01ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.7916] device (tap06a83769-00): carrier: link connected
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.796 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[932b3e6b-5e24-4bdd-8344-9566a0f932bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.816 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d31002-e905-44cf-ad51-90417990975a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06a83769-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:a2:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401505, 'reachable_time': 18850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263896, 'error': None, 'target': 'ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 sudo[262980]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.835 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f4935413-34af-491a-afa5-533a08a7a0ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:a2b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 401505, 'tstamp': 401505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263897, 'error': None, 'target': 'ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:39:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.854 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca69ad9-6dfa-4981-b942-655ce218c38e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06a83769-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:a2:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401505, 'reachable_time': 18850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263898, 'error': None, 'target': 'ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:39:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.890 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8b931d-fbe5-43ba-a1be-8cae2f0608e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 sudo[263901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:39:20 compute-0 sudo[263901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:20 compute-0 sudo[263901]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.948 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab58a25-d481-4b69-a7c7-aebf763f0a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.950 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06a83769-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.950 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.951 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06a83769-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:20 compute-0 kernel: tap06a83769-00: entered promiscuous mode
Feb 02 11:39:20 compute-0 NetworkManager[49067]: <info>  [1770032360.9535] manager: (tap06a83769-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.955 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06a83769-00, col_values=(('external_ids', {'iface-id': '1b19eaf6-869f-4563-a74e-b4aff65ccdab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
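The three ovsdbapp transactions above are the metadata agent re-homing its veth: drop tap06a83769-00 from br-ex if present, add it to br-int, and tag the Interface with the iface-id that OVN binds on. The same operations expressed with plain ovs-vsctl, wrapped in Python for illustration (the flags mirror the logged if_exists/may_exist semantics):

    import subprocess

    def vsctl(*args):
        # Thin wrapper around ovs-vsctl; requires root on the chassis.
        subprocess.check_call(["ovs-vsctl", *args])

    vsctl("--if-exists", "del-port", "br-ex", "tap06a83769-00")
    vsctl("--may-exist", "add-port", "br-int", "tap06a83769-00")
    vsctl("set", "Interface", "tap06a83769-00",
          "external_ids:iface-id=1b19eaf6-869f-4563-a74e-b4aff65ccdab")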
Feb 02 11:39:20 compute-0 ovn_controller[154901]: 2026-02-02T11:39:20Z|00062|binding|INFO|Releasing lport 1b19eaf6-869f-4563-a74e-b4aff65ccdab from this chassis (sb_readonly=0)
Feb 02 11:39:20 compute-0 nova_compute[251290]: 2026-02-02 11:39:20.963 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.964 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/06a83769-0f4f-4012-9307-4d0e81e87120.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/06a83769-0f4f-4012-9307-4d0e81e87120.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.965 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[8581ea96-4421-4346-bf32-f20a29e5d7d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.966 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-06a83769-0f4f-4012-9307-4d0e81e87120
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/06a83769-0f4f-4012-9307-4d0e81e87120.pid.haproxy
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID 06a83769-0f4f-4012-9307-4d0e81e87120
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:39:20 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:20.967 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120', 'env', 'PROCESS_TAG=haproxy-06a83769-0f4f-4012-9307-4d0e81e87120', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/06a83769-0f4f-4012-9307-4d0e81e87120.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
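The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace, relays to the neutron metadata UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve the instance. Once the proxy is up, a hypothetical smoke test from the same namespace (requires root; the namespace name is taken from the log, the use of curl is an assumption):

    import subprocess

    # Hit the proxy the way a guest would; any HTTP response back shows
    # haproxy is listening in the namespace and relaying to the socket.
    subprocess.run(
        ["ip", "netns", "exec",
         "ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120",
         "curl", "-si", "http://169.254.169.254/"],
        check=True)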
Feb 02 11:39:20 compute-0 sudo[263931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:39:20 compute-0 sudo[263931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.021 251294 DEBUG nova.compute.manager [req-d79a15ba-104f-4cc7-b995-389731472b7d req-e28c14eb-eb61-4df9-a286-da60c0d12545 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.022 251294 DEBUG oslo_concurrency.lockutils [req-d79a15ba-104f-4cc7-b995-389731472b7d req-e28c14eb-eb61-4df9-a286-da60c0d12545 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.022 251294 DEBUG oslo_concurrency.lockutils [req-d79a15ba-104f-4cc7-b995-389731472b7d req-e28c14eb-eb61-4df9-a286-da60c0d12545 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.022 251294 DEBUG oslo_concurrency.lockutils [req-d79a15ba-104f-4cc7-b995-389731472b7d req-e28c14eb-eb61-4df9-a286-da60c0d12545 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.023 251294 DEBUG nova.compute.manager [req-d79a15ba-104f-4cc7-b995-389731472b7d req-e28c14eb-eb61-4df9-a286-da60c0d12545 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Processing event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
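The lock dance above (acquire, pop, release, all held for 0.000s) is nova's per-instance event serialization: external events such as network-vif-plugged are popped under a named "<uuid>-events" lock so the spawning thread and the event callback cannot race. A minimal sketch of the same named-lock pattern with oslo.concurrency (the body is a placeholder, not nova's actual code):

    from oslo_concurrency import lockutils

    # Serialize event handling for one instance, as in the log lines above.
    with lockutils.lock("ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events"):
        pass  # pop the pending network-vif-plugged event here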
Feb 02 11:39:21 compute-0 sshd-session[263821]: Received disconnect from 45.148.10.151 port 62006:11:  [preauth]
Feb 02 11:39:21 compute-0 sshd-session[263821]: Disconnected from authenticating user root 45.148.10.151 port 62006 [preauth]
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.294 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.295 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032361.293695, ccf853f9-d90e-46b8-85a2-b47f8fc8585e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.295 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] VM Started (Lifecycle Event)
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.308 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.313 251294 INFO nova.virt.libvirt.driver [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance spawned successfully.
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.315 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.317 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.322 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:39:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000057s ======
Feb 02 11:39:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:21.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.342 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.342 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.343 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.343 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.343 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.344 251294 DEBUG nova.virt.libvirt.driver [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.348 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.349 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032361.2940152, ccf853f9-d90e-46b8-85a2-b47f8fc8585e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.349 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] VM Paused (Lifecycle Event)
Feb 02 11:39:21 compute-0 podman[264037]: 2026-02-02 11:39:21.358051945 +0000 UTC m=+0.061656278 container create e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:39:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.381 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.386 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032361.3013058, ccf853f9-d90e-46b8-85a2-b47f8fc8585e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.387 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] VM Resumed (Lifecycle Event)
Feb 02 11:39:21 compute-0 systemd[1]: Started libpod-conmon-e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b.scope.
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.412 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.419 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.422 251294 INFO nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Took 7.53 seconds to spawn the instance on the hypervisor.
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.422 251294 DEBUG nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:39:21 compute-0 podman[264037]: 2026-02-02 11:39:21.330284239 +0000 UTC m=+0.033888592 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:39:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83a4aa2ba93104cdbd87356a948bf9c2e1fa99630ac9195c54764439212a589/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:21 compute-0 sudo[263931]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:21 compute-0 podman[264037]: 2026-02-02 11:39:21.454815357 +0000 UTC m=+0.158419720 container init e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.455 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:39:21 compute-0 podman[264037]: 2026-02-02 11:39:21.45946007 +0000 UTC m=+0.163064403 container start e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [NOTICE]   (264070) : New worker (264072) forked
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:39:21 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [NOTICE]   (264070) : Loading success.
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.506 251294 INFO nova.compute.manager [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Took 8.55 seconds to build instance.
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:39:21 compute-0 nova_compute[251290]: 2026-02-02 11:39:21.522 251294 DEBUG oslo_concurrency.lockutils [None req-748cb357-52df-45ae-a107-9af4ee1e4a58 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:21 compute-0 sudo[264081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:39:21 compute-0 sudo[264081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:21 compute-0 sudo[264081]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:21 compute-0 sudo[264106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:39:21 compute-0 sudo[264106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:39:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.003469588 +0000 UTC m=+0.037445324 container create 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:39:22 compute-0 systemd[1]: Started libpod-conmon-68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef.scope.
Feb 02 11:39:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:21.987578842 +0000 UTC m=+0.021554598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.082941425 +0000 UTC m=+0.116917181 container init 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.089883214 +0000 UTC m=+0.123858960 container start 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:39:22 compute-0 zealous_tesla[264192]: 167 167
Feb 02 11:39:22 compute-0 systemd[1]: libpod-68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef.scope: Deactivated successfully.
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.096313348 +0000 UTC m=+0.130289084 container attach 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.096971247 +0000 UTC m=+0.130947013 container died 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c0d956169907873b000c54482a79f26e6c08075faf01fcd9797759061bb9f3f-merged.mount: Deactivated successfully.
Feb 02 11:39:22 compute-0 podman[264175]: 2026-02-02 11:39:22.138780254 +0000 UTC m=+0.172755990 container remove 68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 02 11:39:22 compute-0 systemd[1]: libpod-conmon-68e00704b2e909ef87de0cb25691ab3489a8ba19d7c9a8d901c7a7d1bcc6bbef.scope: Deactivated successfully.
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.293572599 +0000 UTC m=+0.045061922 container create 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:39:22 compute-0 systemd[1]: Started libpod-conmon-4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e.scope.
Feb 02 11:39:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.274678078 +0000 UTC m=+0.026167441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.396275012 +0000 UTC m=+0.147764345 container init 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.403440497 +0000 UTC m=+0.154929840 container start 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.407173194 +0000 UTC m=+0.158662557 container attach 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:39:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:22.675 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:22.677 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:22.678 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:22 compute-0 charming_pare[264233]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:39:22 compute-0 charming_pare[264233]: --> All data devices are unavailable
Feb 02 11:39:22 compute-0 systemd[1]: libpod-4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e.scope: Deactivated successfully.
Feb 02 11:39:22 compute-0 conmon[264233]: conmon 4bb0445c1f9621dc55fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e.scope/container/memory.events
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.754553807 +0000 UTC m=+0.506043140 container died 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb0f84467dc0f6b657dd7ba9a6d28bf34761f2dfa2c2ccb698c3138238bf466b-merged.mount: Deactivated successfully.
Feb 02 11:39:22 compute-0 podman[264216]: 2026-02-02 11:39:22.804090846 +0000 UTC m=+0.555580179 container remove 4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:39:22 compute-0 systemd[1]: libpod-conmon-4bb0445c1f9621dc55fb1eaac19b1a14945a1a115589d03c182e9abe7e96bb6e.scope: Deactivated successfully.
Feb 02 11:39:22 compute-0 sudo[264106]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:22 compute-0 sudo[264258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:39:22 compute-0 sudo[264258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:22 compute-0 sudo[264258]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:22 compute-0 sudo[264284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:39:22 compute-0 sudo[264284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.135 251294 DEBUG nova.compute.manager [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.137 251294 DEBUG oslo_concurrency.lockutils [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.137 251294 DEBUG oslo_concurrency.lockutils [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.137 251294 DEBUG oslo_concurrency.lockutils [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.137 251294 DEBUG nova.compute.manager [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:39:23 compute-0 nova_compute[251290]: 2026-02-02 11:39:23.137 251294 WARNING nova.compute.manager [req-da9f4e3e-b762-49a2-bb4c-4136d32bc8f4 req-4ea60fdc-007b-4f79-b36f-fde78bf3491b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 for instance with vm_state active and task_state None.
Feb 02 11:39:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:23.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.399421573 +0000 UTC m=+0.037207197 container create 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:39:23 compute-0 systemd[1]: Started libpod-conmon-875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4.scope.
Feb 02 11:39:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.473441094 +0000 UTC m=+0.111226738 container init 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.384328021 +0000 UTC m=+0.022113675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.481174026 +0000 UTC m=+0.118959660 container start 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:39:23 compute-0 bold_feynman[264366]: 167 167
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.487192428 +0000 UTC m=+0.124978142 container attach 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:39:23 compute-0 systemd[1]: libpod-875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4.scope: Deactivated successfully.
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.489948797 +0000 UTC m=+0.127734421 container died 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 11:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a242a173523f1267fc9321c82cb1188a007ae9b02a609ada028816fce2b77999-merged.mount: Deactivated successfully.
Feb 02 11:39:23 compute-0 podman[264349]: 2026-02-02 11:39:23.534910315 +0000 UTC m=+0.172695939 container remove 875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:39:23 compute-0 systemd[1]: libpod-conmon-875525e76ea11aa73fd05de2f4beaa6fe3bfe2d03d4fbbb3dc5cd4d047ab34d4.scope: Deactivated successfully.
Feb 02 11:39:23 compute-0 podman[264391]: 2026-02-02 11:39:23.670682854 +0000 UTC m=+0.040596084 container create 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 02 11:39:23 compute-0 systemd[1]: Started libpod-conmon-38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d.scope.
Feb 02 11:39:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d88826a7808e7ab6436aaf21ea234c7986e1839e8d925fc85ee4af01326d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d88826a7808e7ab6436aaf21ea234c7986e1839e8d925fc85ee4af01326d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d88826a7808e7ab6436aaf21ea234c7986e1839e8d925fc85ee4af01326d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d88826a7808e7ab6436aaf21ea234c7986e1839e8d925fc85ee4af01326d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:23 compute-0 podman[264391]: 2026-02-02 11:39:23.651806243 +0000 UTC m=+0.021719503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:23 compute-0 podman[264391]: 2026-02-02 11:39:23.755324829 +0000 UTC m=+0.125238079 container init 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:39:23 compute-0 podman[264391]: 2026-02-02 11:39:23.760331143 +0000 UTC m=+0.130244373 container start 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:39:23 compute-0 podman[264391]: 2026-02-02 11:39:23.76443319 +0000 UTC m=+0.134346420 container attach 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]: {
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:     "1": [
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:         {
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "devices": [
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "/dev/loop3"
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             ],
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "lv_name": "ceph_lv0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "lv_size": "21470642176",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "name": "ceph_lv0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "tags": {
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.cluster_name": "ceph",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.crush_device_class": "",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.encrypted": "0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.osd_id": "1",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.type": "block",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.vdo": "0",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:                 "ceph.with_tpm": "0"
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             },
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "type": "block",
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:             "vg_name": "ceph_vg0"
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:         }
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]:     ]
Feb 02 11:39:24 compute-0 hardcore_hopper[264408]: }
Feb 02 11:39:24 compute-0 systemd[1]: libpod-38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d.scope: Deactivated successfully.
Feb 02 11:39:24 compute-0 podman[264391]: 2026-02-02 11:39:24.101604421 +0000 UTC m=+0.471517681 container died 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c37d88826a7808e7ab6436aaf21ea234c7986e1839e8d925fc85ee4af01326d1-merged.mount: Deactivated successfully.
Feb 02 11:39:24 compute-0 podman[264391]: 2026-02-02 11:39:24.141410721 +0000 UTC m=+0.511323951 container remove 38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:39:24 compute-0 systemd[1]: libpod-conmon-38d0e5455dfaf150cdc9b98ad7aec8c8d887380fc6c2bf3d16a599fb5b288d8d.scope: Deactivated successfully.
Feb 02 11:39:24 compute-0 sudo[264284]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:24 compute-0 nova_compute[251290]: 2026-02-02 11:39:24.224 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:24 compute-0 sudo[264428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:39:24 compute-0 sudo[264428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:24 compute-0 sudo[264428]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:24 compute-0 sudo[264453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:39:24 compute-0 sudo[264453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:24 compute-0 ceph-mon[74676]: pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.693796558 +0000 UTC m=+0.043198989 container create b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.676940685 +0000 UTC m=+0.026343136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:24 compute-0 systemd[1]: Started libpod-conmon-b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154.scope.
Feb 02 11:39:24 compute-0 NetworkManager[49067]: <info>  [1770032364.8183] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb 02 11:39:24 compute-0 NetworkManager[49067]: <info>  [1770032364.8192] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb 02 11:39:24 compute-0 ovn_controller[154901]: 2026-02-02T11:39:24Z|00063|binding|INFO|Releasing lport 1b19eaf6-869f-4563-a74e-b4aff65ccdab from this chassis (sb_readonly=0)
Feb 02 11:39:24 compute-0 nova_compute[251290]: 2026-02-02 11:39:24.826 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:24 compute-0 ovn_controller[154901]: 2026-02-02T11:39:24Z|00064|binding|INFO|Releasing lport 1b19eaf6-869f-4563-a74e-b4aff65ccdab from this chassis (sb_readonly=0)
Feb 02 11:39:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.880454786 +0000 UTC m=+0.229857237 container init b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.887968671 +0000 UTC m=+0.237371102 container start b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.891348338 +0000 UTC m=+0.240750789 container attach b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:39:24 compute-0 cranky_sammet[264535]: 167 167
Feb 02 11:39:24 compute-0 systemd[1]: libpod-b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154.scope: Deactivated successfully.
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.893155229 +0000 UTC m=+0.242557660 container died b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f980f2844f76b6690d73c507318857a0695b0a243b04eb27b8f6d10a875b98-merged.mount: Deactivated successfully.
Feb 02 11:39:24 compute-0 podman[264519]: 2026-02-02 11:39:24.932502667 +0000 UTC m=+0.281905108 container remove b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:39:24 compute-0 systemd[1]: libpod-conmon-b8cd614b443205818d6600bea57f6eb9c058e680441bea5de7b71fb5ed5bb154.scope: Deactivated successfully.
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.073128636 +0000 UTC m=+0.041406507 container create 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Feb 02 11:39:25 compute-0 systemd[1]: Started libpod-conmon-65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529.scope.
Feb 02 11:39:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f514da31554498bca01550df05effb0ee6c0592db692f48158b7d415120ec23b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f514da31554498bca01550df05effb0ee6c0592db692f48158b7d415120ec23b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f514da31554498bca01550df05effb0ee6c0592db692f48158b7d415120ec23b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f514da31554498bca01550df05effb0ee6c0592db692f48158b7d415120ec23b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.055319636 +0000 UTC m=+0.023597527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.170278149 +0000 UTC m=+0.138556040 container init 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.177036293 +0000 UTC m=+0.145314164 container start 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.180944065 +0000 UTC m=+0.149221956 container attach 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.250 251294 DEBUG nova.compute.manager [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.252 251294 DEBUG nova.compute.manager [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing instance network info cache due to event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.252 251294 DEBUG oslo_concurrency.lockutils [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.252 251294 DEBUG oslo_concurrency.lockutils [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.252 251294 DEBUG nova.network.neutron [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:39:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:25.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:25 compute-0 nova_compute[251290]: 2026-02-02 11:39:25.558 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:25 compute-0 lvm[264651]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:39:25 compute-0 lvm[264651]: VG ceph_vg0 finished
Feb 02 11:39:25 compute-0 great_curran[264576]: {}
Feb 02 11:39:25 compute-0 systemd[1]: libpod-65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529.scope: Deactivated successfully.
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.855117781 +0000 UTC m=+0.823395652 container died 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f514da31554498bca01550df05effb0ee6c0592db692f48158b7d415120ec23b-merged.mount: Deactivated successfully.
Feb 02 11:39:25 compute-0 podman[264560]: 2026-02-02 11:39:25.896109726 +0000 UTC m=+0.864387597 container remove 65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 02 11:39:25 compute-0 systemd[1]: libpod-conmon-65c464f8cb76cf8b2b2428e5f25864084ae8e957d108380845c6bc131a084529.scope: Deactivated successfully.
Feb 02 11:39:25 compute-0 sudo[264453]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:39:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:39:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:26 compute-0 sudo[264667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:39:26 compute-0 sudo[264667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:26 compute-0 sudo[264667]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:26 compute-0 sudo[264692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:39:26 compute-0 sudo[264692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:26 compute-0 sudo[264692]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:26 compute-0 ceph-mon[74676]: pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 02 11:39:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:39:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:26 compute-0 nova_compute[251290]: 2026-02-02 11:39:26.716 251294 DEBUG nova.network.neutron [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated VIF entry in instance network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:39:26 compute-0 nova_compute[251290]: 2026-02-02 11:39:26.718 251294 DEBUG nova.network.neutron [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:26 compute-0 nova_compute[251290]: 2026-02-02 11:39:26.735 251294 DEBUG oslo_concurrency.lockutils [req-187db1ae-e11c-4fbf-8d84-28560d5b785d req-a0bb24a0-9047-48e1-83c1-0f186ba32af6 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:26] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:39:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:26] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:39:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:27.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:27.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:28 compute-0 ceph-mon[74676]: pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:28.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:29 compute-0 nova_compute[251290]: 2026-02-02 11:39:29.228 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:29.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:39:29
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.nfs', 'images', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control']
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:39:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:39:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:29 compute-0 ceph-mon[74676]: pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:39:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:39:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:39:30 compute-0 nova_compute[251290]: 2026-02-02 11:39:30.561 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3857640473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:31 compute-0 nova_compute[251290]: 2026-02-02 11:39:31.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:31 compute-0 nova_compute[251290]: 2026-02-02 11:39:31.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 11:39:31 compute-0 nova_compute[251290]: 2026-02-02 11:39:31.038 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 11:39:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:31.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3099576645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:31 compute-0 ceph-mon[74676]: pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Feb 02 11:39:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:32 compute-0 nova_compute[251290]: 2026-02-02 11:39:32.039 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:32 compute-0 nova_compute[251290]: 2026-02-02 11:39:32.039 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2395376450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:33.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 02 11:39:33 compute-0 ceph-mon[74676]: pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 02 11:39:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4162398653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 11:39:34 compute-0 nova_compute[251290]: 2026-02-02 11:39:34.231 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:34 compute-0 ovn_controller[154901]: 2026-02-02T11:39:34Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:bd:9e 10.100.0.4
Feb 02 11:39:34 compute-0 ovn_controller[154901]: 2026-02-02T11:39:34Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:bd:9e 10.100.0.4
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.028 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.052 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.053 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.079 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.080 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.080 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.080 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.081 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:35.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 02 11:39:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:39:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096574493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.551 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.563 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:35 compute-0 ceph-mon[74676]: pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 02 11:39:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1096574493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.616 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.617 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.749 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.750 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4320MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.750 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.751 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.871 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.872 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.872 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:39:35 compute-0 nova_compute[251290]: 2026-02-02 11:39:35.991 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:39:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:39:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158376166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:36 compute-0 nova_compute[251290]: 2026-02-02 11:39:36.467 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:39:36 compute-0 nova_compute[251290]: 2026-02-02 11:39:36.473 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:39:36 compute-0 nova_compute[251290]: 2026-02-02 11:39:36.491 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:39:36 compute-0 nova_compute[251290]: 2026-02-02 11:39:36.513 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:39:36 compute-0 nova_compute[251290]: 2026-02-02 11:39:36.513 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3158376166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:39:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:39:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:39:37 compute-0 nova_compute[251290]: 2026-02-02 11:39:37.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:37 compute-0 nova_compute[251290]: 2026-02-02 11:39:37.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:37 compute-0 nova_compute[251290]: 2026-02-02 11:39:37.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:37.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:37.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Feb 02 11:39:37 compute-0 ceph-mon[74676]: pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Feb 02 11:39:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:38.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:39:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:38.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:39 compute-0 nova_compute[251290]: 2026-02-02 11:39:39.235 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.031 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.032 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.032 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:39:40 compute-0 ceph-mon[74676]: pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.565 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.779 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.780 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.780 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 11:39:40 compute-0 nova_compute[251290]: 2026-02-02 11:39:40.780 251294 DEBUG nova.objects.instance [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:39:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:39:41 compute-0 nova_compute[251290]: 2026-02-02 11:39:41.563 251294 INFO nova.compute.manager [None req-bf51d4d4-068f-40b2-b53b-a94c0e720d8c abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Get console output
Feb 02 11:39:41 compute-0 nova_compute[251290]: 2026-02-02 11:39:41.569 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:39:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:42 compute-0 ceph-mon[74676]: pgmap v839: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:39:42 compute-0 nova_compute[251290]: 2026-02-02 11:39:42.689 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:42 compute-0 nova_compute[251290]: 2026-02-02 11:39:42.709 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:42 compute-0 nova_compute[251290]: 2026-02-02 11:39:42.709 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 11:39:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:43 compute-0 ceph-mon[74676]: pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:39:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892173503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:39:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:39:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892173503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:39:44 compute-0 nova_compute[251290]: 2026-02-02 11:39:44.268 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:44 compute-0 podman[264782]: 2026-02-02 11:39:44.29497552 +0000 UTC m=+0.079807478 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 11:39:44 compute-0 podman[264783]: 2026-02-02 11:39:44.326653827 +0000 UTC m=+0.110723573 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:39:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:39:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/892173503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:39:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/892173503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:39:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.353 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.354 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.354 251294 DEBUG nova.objects.instance [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'flavor' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:39:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.568 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:45 compute-0 ceph-mon[74676]: pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.953 251294 DEBUG nova.objects.instance [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_requests' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:39:45 compute-0 nova_compute[251290]: 2026-02-02 11:39:45.971 251294 DEBUG nova.network.neutron [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:39:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:46 compute-0 nova_compute[251290]: 2026-02-02 11:39:46.156 251294 DEBUG nova.policy [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:39:46 compute-0 sudo[264828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:39:46 compute-0 sudo[264828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:39:46 compute-0 sudo[264828]: pam_unix(sudo:session): session closed for user root
Feb 02 11:39:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:46] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb 02 11:39:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:46] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb 02 11:39:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:47.145Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:39:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:47.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.175 251294 DEBUG nova.network.neutron [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Successfully created port: fb58aed2-3c97-4c85-8834-01bd422b3fd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:39:48 compute-0 ceph-mon[74676]: pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.783 251294 DEBUG nova.network.neutron [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Successfully updated port: fb58aed2-3c97-4c85-8834-01bd422b3fd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.797 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.797 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.798 251294 DEBUG nova.network.neutron [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:39:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:48.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.892 251294 DEBUG nova.compute.manager [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-changed-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.892 251294 DEBUG nova.compute.manager [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing instance network info cache due to event network-changed-fb58aed2-3c97-4c85-8834-01bd422b3fd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:39:48 compute-0 nova_compute[251290]: 2026-02-02 11:39:48.893 251294 DEBUG oslo_concurrency.lockutils [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.271 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.785 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.807 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Triggering sync for uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.808 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.808 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:49 compute-0 nova_compute[251290]: 2026-02-02 11:39:49.842 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:50 compute-0 ceph-mon[74676]: pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 02 11:39:50 compute-0 nova_compute[251290]: 2026-02-02 11:39:50.570 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:51.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:39:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:52 compute-0 ceph-mon[74676]: pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.807 251294 DEBUG nova.network.neutron [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.837 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.840 251294 DEBUG oslo_concurrency.lockutils [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.840 251294 DEBUG nova.network.neutron [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing network info cache for port fb58aed2-3c97-4c85-8834-01bd422b3fd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.844 251294 DEBUG nova.virt.libvirt.vif [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.845 251294 DEBUG nova.network.os_vif_util [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.846 251294 DEBUG nova.network.os_vif_util [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.847 251294 DEBUG os_vif [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.847 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.848 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.848 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.852 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.853 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb58aed2-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.853 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb58aed2-3c, col_values=(('external_ids', {'iface-id': 'fb58aed2-3c97-4c85-8834-01bd422b3fd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:43:b5', 'vm-uuid': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.855 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 NetworkManager[49067]: <info>  [1770032392.8565] manager: (tapfb58aed2-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.861 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.864 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.865 251294 INFO os_vif [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c')
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.866 251294 DEBUG nova.virt.libvirt.vif [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.866 251294 DEBUG nova.network.os_vif_util [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.867 251294 DEBUG nova.network.os_vif_util [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.870 251294 DEBUG nova.virt.libvirt.guest [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] attach device xml: <interface type="ethernet">
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:4a:43:b5"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <target dev="tapfb58aed2-3c"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]: </interface>
Feb 02 11:39:52 compute-0 nova_compute[251290]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 11:39:52 compute-0 kernel: tapfb58aed2-3c: entered promiscuous mode
Feb 02 11:39:52 compute-0 NetworkManager[49067]: <info>  [1770032392.8841] manager: (tapfb58aed2-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Feb 02 11:39:52 compute-0 ovn_controller[154901]: 2026-02-02T11:39:52Z|00065|binding|INFO|Claiming lport fb58aed2-3c97-4c85-8834-01bd422b3fd4 for this chassis.
Feb 02 11:39:52 compute-0 ovn_controller[154901]: 2026-02-02T11:39:52Z|00066|binding|INFO|fb58aed2-3c97-4c85-8834-01bd422b3fd4: Claiming fa:16:3e:4a:43:b5 10.100.0.29
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.884 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.895 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:43:b5 10.100.0.29'], port_security=['fa:16:3e:4a:43:b5 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed74a638-b21d-4bea-a72e-473ea537cd95', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b1e05b1-1f2d-464d-ab65-bb650bbe0f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed41d58-bb56-40d9-abe4-e24417089c0a, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=fb58aed2-3c97-4c85-8834-01bd422b3fd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.897 165304 INFO neutron.agent.ovn.metadata.agent [-] Port fb58aed2-3c97-4c85-8834-01bd422b3fd4 in datapath ed74a638-b21d-4bea-a72e-473ea537cd95 bound to our chassis
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.898 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed74a638-b21d-4bea-a72e-473ea537cd95
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.905 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 ovn_controller[154901]: 2026-02-02T11:39:52Z|00067|binding|INFO|Setting lport fb58aed2-3c97-4c85-8834-01bd422b3fd4 ovn-installed in OVS
Feb 02 11:39:52 compute-0 ovn_controller[154901]: 2026-02-02T11:39:52Z|00068|binding|INFO|Setting lport fb58aed2-3c97-4c85-8834-01bd422b3fd4 up in Southbound
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.909 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.912 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d25dc-e8fe-4bbc-ba01-603344797f84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.913 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped74a638-b1 in ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 11:39:52 compute-0 systemd-udevd[264867]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.917 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped74a638-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.917 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[ace50840-d4d2-497b-a436-030cf8abc3be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.921 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[673e3253-1d40-4239-820f-f43f8000ea7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 NetworkManager[49067]: <info>  [1770032392.9285] device (tapfb58aed2-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:39:52 compute-0 NetworkManager[49067]: <info>  [1770032392.9293] device (tapfb58aed2-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.936 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ae7a57-652d-4e81-aced-af24c9beb314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.952 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2609b82d-d988-4157-8647-3f290ed68675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.963 251294 DEBUG nova.virt.libvirt.driver [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.965 251294 DEBUG nova.virt.libvirt.driver [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.966 251294 DEBUG nova.virt.libvirt.driver [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:83:bd:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.966 251294 DEBUG nova.virt.libvirt.driver [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:4a:43:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.978 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[43851342-cc8a-42ee-940b-e21e6772e8f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 NetworkManager[49067]: <info>  [1770032392.9847] manager: (taped74a638-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Feb 02 11:39:52 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:52.983 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e6dbdf29-8b03-4865-b4fb-bdac901eec09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:52 compute-0 nova_compute[251290]: 2026-02-02 11:39:52.994 251294 DEBUG nova.virt.libvirt.guest [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:39:52</nova:creationTime>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:39:52 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     <nova:port uuid="fb58aed2-3c97-4c85-8834-01bd422b3fd4">
Feb 02 11:39:52 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 02 11:39:52 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:39:52 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:39:52 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:39:52 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.013 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[03606979-710b-4701-b446-1f6f5dd0bd0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.017 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[e478a44c-2dac-4d3d-a539-12eb5bffb920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.023 251294 DEBUG oslo_concurrency.lockutils [None req-a01f5a16-2c6f-4186-b4fa-4b95ff93c05e abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:53 compute-0 NetworkManager[49067]: <info>  [1770032393.0395] device (taped74a638-b0): carrier: link connected
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.044 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[917f2a32-afea-4155-95b8-1812b860e7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.059 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[1156b4d9-3c73-4ec7-bbc7-d85960e12b56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped74a638-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:84:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404730, 'reachable_time': 19252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264893, 'error': None, 'target': 'ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.074 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[63dd2b0c-0c70-4ad9-8e67-b932fc61319b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:8427'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 404730, 'tstamp': 404730}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264894, 'error': None, 'target': 'ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.087 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[81b46641-bff0-4014-956d-255baf39e9b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped74a638-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:84:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404730, 'reachable_time': 19252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264895, 'error': None, 'target': 'ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.116 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2620f0f6-1878-42d9-aec9-3401d02daef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.152 251294 DEBUG nova.compute.manager [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.152 251294 DEBUG oslo_concurrency.lockutils [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.153 251294 DEBUG oslo_concurrency.lockutils [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.153 251294 DEBUG oslo_concurrency.lockutils [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.153 251294 DEBUG nova.compute.manager [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.153 251294 WARNING nova.compute.manager [req-5b065cb2-9370-43cc-a359-275ce25edf75 req-731c0c8e-ebca-4864-ba73-7ab35a740a2a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 for instance with vm_state active and task_state None.
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.178 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[19161374-3f3c-400c-be88-412d10f69ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.181 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped74a638-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.181 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.181 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped74a638-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:53 compute-0 NetworkManager[49067]: <info>  [1770032393.1844] manager: (taped74a638-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb 02 11:39:53 compute-0 kernel: taped74a638-b0: entered promiscuous mode
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.184 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.187 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped74a638-b0, col_values=(('external_ids', {'iface-id': '7a5073b8-7726-4539-a33e-e70050472f3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.188 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.189 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:53 compute-0 ovn_controller[154901]: 2026-02-02T11:39:53Z|00069|binding|INFO|Releasing lport 7a5073b8-7726-4539-a33e-e70050472f3d from this chassis (sb_readonly=0)
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.193 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed74a638-b21d-4bea-a72e-473ea537cd95.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed74a638-b21d-4bea-a72e-473ea537cd95.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:39:53 compute-0 nova_compute[251290]: 2026-02-02 11:39:53.194 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.194 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[62dcbfc8-9bdd-435e-b077-84a8ac7cb389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.196 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-ed74a638-b21d-4bea-a72e-473ea537cd95
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/ed74a638-b21d-4bea-a72e-473ea537cd95.pid.haproxy
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID ed74a638-b21d-4bea-a72e-473ea537cd95
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:39:53 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:53.197 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95', 'env', 'PROCESS_TAG=haproxy-ed74a638-b21d-4bea-a72e-473ea537cd95', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed74a638-b21d-4bea-a72e-473ea537cd95.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:39:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:53.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Feb 02 11:39:53 compute-0 podman[264927]: 2026-02-02 11:39:53.57151105 +0000 UTC m=+0.050235170 container create e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 02 11:39:53 compute-0 ceph-mon[74676]: pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Feb 02 11:39:53 compute-0 systemd[1]: Started libpod-conmon-e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10.scope.
Feb 02 11:39:53 compute-0 podman[264927]: 2026-02-02 11:39:53.54706881 +0000 UTC m=+0.025792950 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:39:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565c07b3561e09f46255e9631bc3c45c8a977aeee43f3fe4690ccab23c9a582c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:39:53 compute-0 podman[264927]: 2026-02-02 11:39:53.669783696 +0000 UTC m=+0.148507846 container init e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:39:53 compute-0 podman[264927]: 2026-02-02 11:39:53.674597624 +0000 UTC m=+0.153321754 container start e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 11:39:53 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [NOTICE]   (264947) : New worker (264949) forked
Feb 02 11:39:53 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [NOTICE]   (264947) : Loading success.
Feb 02 11:39:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:54.110 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:39:54 compute-0 nova_compute[251290]: 2026-02-02 11:39:54.111 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:39:54.112 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:39:54 compute-0 ovn_controller[154901]: 2026-02-02T11:39:54Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:43:b5 10.100.0.29
Feb 02 11:39:54 compute-0 ovn_controller[154901]: 2026-02-02T11:39:54Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:43:b5 10.100.0.29
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.288 251294 DEBUG nova.compute.manager [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.288 251294 DEBUG oslo_concurrency.lockutils [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.289 251294 DEBUG oslo_concurrency.lockutils [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.289 251294 DEBUG oslo_concurrency.lockutils [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.289 251294 DEBUG nova.compute.manager [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.289 251294 WARNING nova.compute.manager [req-5acd2a63-f8e0-4aa7-a2c6-350fe0e67c9c req-ef0dccc1-72b2-4ee5-b98e-3419b1273d3e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 for instance with vm_state active and task_state None.
Feb 02 11:39:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:55.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.573 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:55 compute-0 ceph-mon[74676]: pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.833 251294 DEBUG nova.network.neutron [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated VIF entry in instance network info cache for port fb58aed2-3c97-4c85-8834-01bd422b3fd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.834 251294 DEBUG nova.network.neutron [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:39:55 compute-0 nova_compute[251290]: 2026-02-02 11:39:55.850 251294 DEBUG oslo_concurrency.lockutils [req-e0f6d048-cf04-4e56-817f-3279d7dcc477 req-e8ba9063-2668-4a0d-b082-e80a510091cc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:39:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:39:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:39:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:39:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:39:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:39:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:39:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:56] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb 02 11:39:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:39:56] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb 02 11:39:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:57.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:39:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:57.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:39:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 5.7 KiB/s wr, 2 op/s
Feb 02 11:39:57 compute-0 nova_compute[251290]: 2026-02-02 11:39:57.855 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:39:58 compute-0 ceph-mon[74676]: pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 5.7 KiB/s wr, 2 op/s
Feb 02 11:39:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:39:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:39:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:39:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:39:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:39:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:39:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:39:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:39:59.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 1 op/s
Feb 02 11:39:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:39:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:39:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Feb 02 11:40:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Feb 02 11:40:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.jnnwjo on compute-1 is in error state
Feb 02 11:40:00 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:00.114 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:40:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5669 writes, 25K keys, 5669 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5669 writes, 5669 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1502 writes, 6841 keys, 1502 commit groups, 1.0 writes per commit group, ingest: 11.33 MB, 0.02 MB/s
                                           Interval WAL: 1502 writes, 1502 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    114.4      0.35              0.08        14    0.025       0      0       0.0       0.0
                                             L6      1/0   11.99 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    127.3    108.7      1.52              0.31        13    0.117     67K   6905       0.0       0.0
                                            Sum      1/0   11.99 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.1    103.4    109.8      1.87              0.40        27    0.069     67K   6905       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     98.0     98.7      0.88              0.18        12    0.073     34K   3089       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    127.3    108.7      1.52              0.31        13    0.117     67K   6905       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    115.4      0.35              0.08        13    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.9 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594e304b350#2 capacity: 304.00 MB usage: 14.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000169 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(810,13.51 MB,4.44388%) FilterBlock(28,199.55 KB,0.064102%) IndexBlock(28,357.27 KB,0.114767%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 11:40:00 compute-0 ceph-mon[74676]: pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 1 op/s
Feb 02 11:40:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:00 compute-0 ceph-mon[74676]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Feb 02 11:40:00 compute-0 ceph-mon[74676]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Feb 02 11:40:00 compute-0 ceph-mon[74676]:     daemon nfs.cephfs.0.0.compute-1.jnnwjo on compute-1 is in error state
Feb 02 11:40:00 compute-0 nova_compute[251290]: 2026-02-02 11:40:00.575 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:40:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:40:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Feb 02 11:40:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2756763249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:02 compute-0 ceph-mon[74676]: pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Feb 02 11:40:02 compute-0 nova_compute[251290]: 2026-02-02 11:40:02.858 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Feb 02 11:40:03 compute-0 ceph-mon[74676]: pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Feb 02 11:40:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:40:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:05.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:40:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3003791478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:40:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Feb 02 11:40:05 compute-0 nova_compute[251290]: 2026-02-02 11:40:05.584 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:06 compute-0 sudo[264970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:40:06 compute-0 sudo[264970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:06 compute-0 sudo[264970]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:06 compute-0 ceph-mon[74676]: pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Feb 02 11:40:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1476333778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:40:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:06] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:40:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:06] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:40:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:07.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:07.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 02 11:40:07 compute-0 nova_compute[251290]: 2026-02-02 11:40:07.860 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:08 compute-0 ceph-mon[74676]: pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 02 11:40:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:08.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:09.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:40:10 compute-0 ceph-mon[74676]: pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:40:10 compute-0 nova_compute[251290]: 2026-02-02 11:40:10.586 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:11.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:40:11 compute-0 ceph-mon[74676]: pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:40:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:12 compute-0 nova_compute[251290]: 2026-02-02 11:40:12.862 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:13.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:40:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:40:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:40:14 compute-0 ceph-mon[74676]: pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:40:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:40:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:15 compute-0 podman[265004]: 2026-02-02 11:40:15.264716427 +0000 UTC m=+0.053590997 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 02 11:40:15 compute-0 podman[265005]: 2026-02-02 11:40:15.330260545 +0000 UTC m=+0.115894722 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:40:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:15.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:15.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
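The radosgw "beast" access lines above record anonymous "HEAD / HTTP/1.0" probes arriving every two seconds from 192.168.122.100 and 192.168.122.102 and answered with 200 at near-zero latency, the signature of load-balancer health checks rather than user traffic. A minimal sketch that issues the same probe, assuming the gateway listens on port 8080 (the listening port is not shown in the log):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")          # same anonymous probe as in the log
    print(conn.getresponse().status)   # the journal shows these returning 200
    conn.close()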
Feb 02 11:40:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:40:15 compute-0 nova_compute[251290]: 2026-02-02 11:40:15.589 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:15 compute-0 ceph-mon[74676]: pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:40:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
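The four ganesha.nfsd lines above recur roughly every five seconds through this section: the NFS server (re)enters a 90-second grace period, reloads client info from the RADOS backend, then checks whether grace can be lifted, with both the reclaim-complete and client-id counts at 0. A minimal sketch for pulling these grace events out of the journal text, keyed to the ":STATE :EVENT :" marker used by this log format:

    import re

    GRACE = re.compile(r"ganesha\.nfsd-\d+\[\w+\] (\w+) :STATE :EVENT :(.*)$")

    def grace_events(lines):
        # yields (function, message) pairs such as
        # ("nfs_start_grace", "NFS Server Now IN GRACE, duration 90")
        for line in lines:
            m = GRACE.search(line)
            if m:
                yield m.group(1), m.group(2)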
Feb 02 11:40:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:16] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:40:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:16] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:40:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:17.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:17.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:40:17 compute-0 nova_compute[251290]: 2026-02-02 11:40:17.883 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:18 compute-0 ceph-mon[74676]: pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 02 11:40:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
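The alertmanager dispatcher keeps failing to deliver notifications: every POST to the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) times out with "context deadline exceeded". A minimal sketch for testing one receiver by hand, with the URL copied from the log; an unreachable endpoint raises the same timeout the dispatcher reports:

    import json
    import urllib.request

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps({"alerts": []}).encode(),  # empty Alertmanager-style payload
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:  # URLError and socket timeouts are OSError subclasses
        print("receiver unreachable:", exc)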
Feb 02 11:40:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:40:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:40:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Feb 02 11:40:20 compute-0 ceph-mon[74676]: pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Feb 02 11:40:20 compute-0 nova_compute[251290]: 2026-02-02 11:40:20.593 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 02 11:40:21 compute-0 ceph-mon[74676]: pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 02 11:40:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:22.677 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:22.677 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:22.678 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
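The three oslo_concurrency lines above are one complete acquire/hold/release cycle of the in-process "_check_child_processes" lock, held for about a millisecond by neutron's ProcessMonitor. This is the standard oslo.concurrency pattern; a minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # runs with the named lock held; lockutils logs the same
        # Acquiring/acquired/released triple seen in the agent log above
        pass

    check_child_processes()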
Feb 02 11:40:22 compute-0 nova_compute[251290]: 2026-02-02 11:40:22.884 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:23.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:23.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:40:24 compute-0 ceph-mon[74676]: pgmap v860: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:40:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:25.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:40:25 compute-0 nova_compute[251290]: 2026-02-02 11:40:25.602 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:26 compute-0 sudo[265058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:40:26 compute-0 sudo[265058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:26 compute-0 sudo[265058]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:26 compute-0 sudo[265083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:40:26 compute-0 sudo[265083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:26 compute-0 sudo[265107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:40:26 compute-0 sudo[265107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:26 compute-0 sudo[265107]: pam_unix(sudo:session): session closed for user root
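The sudo sequence above is the cephadm orchestrator probing this host: locate python3, run the copied cephadm binary with gather-facts, then a /bin/true connectivity check. A minimal sketch reproducing the gather-facts call by hand, with the binary path and timeout copied from the log:

    import subprocess

    CEPHADM = ("/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    facts = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(facts[:200])  # gather-facts prints a JSON document of host facts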
Feb 02 11:40:26 compute-0 nova_compute[251290]: 2026-02-02 11:40:26.545 251294 DEBUG nova.compute.manager [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-changed-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:26 compute-0 nova_compute[251290]: 2026-02-02 11:40:26.545 251294 DEBUG nova.compute.manager [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing instance network info cache due to event network-changed-fb58aed2-3c97-4c85-8834-01bd422b3fd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:40:26 compute-0 nova_compute[251290]: 2026-02-02 11:40:26.546 251294 DEBUG oslo_concurrency.lockutils [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:40:26 compute-0 nova_compute[251290]: 2026-02-02 11:40:26.546 251294 DEBUG oslo_concurrency.lockutils [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:40:26 compute-0 nova_compute[251290]: 2026-02-02 11:40:26.546 251294 DEBUG nova.network.neutron [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing network info cache for port fb58aed2-3c97-4c85-8834-01bd422b3fd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:40:26 compute-0 ceph-mon[74676]: pgmap v861: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:26 compute-0 sudo[265083]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:40:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:40:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:40:26 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
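The handle_command/audit pairs above show the mgr.cephadm module driving the monitor: generating a minimal conf, fetching the client.admin and client.bootstrap-osd keyrings, persisting its state under config-key, and listing destroyed OSDs before it attempts OSD creation below. The same mon commands can be issued from the ceph CLI, which reaches the identical handle_command path; a minimal sketch, assuming an admin keyring is available on the host:

    import subprocess

    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)  # audited on the mon exactly like the mgr dispatch above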
Feb 02 11:40:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:26] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:40:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:26] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:40:27 compute-0 sudo[265163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:40:27 compute-0 sudo[265163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:27 compute-0 sudo[265163]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:27 compute-0 sudo[265188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:40:27 compute-0 sudo[265188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:27.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:40:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:27.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:40:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:27.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.457894844 +0000 UTC m=+0.041850580 container create 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:40:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:27 compute-0 systemd[1]: Started libpod-conmon-64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a.scope.
Feb 02 11:40:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.43889135 +0000 UTC m=+0.022847116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.544504336 +0000 UTC m=+0.128460102 container init 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.552111094 +0000 UTC m=+0.136066850 container start 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.556071297 +0000 UTC m=+0.140027033 container attach 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:40:27 compute-0 nervous_payne[265268]: 167 167
Feb 02 11:40:27 compute-0 systemd[1]: libpod-64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a.scope: Deactivated successfully.
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.558632881 +0000 UTC m=+0.142588617 container died 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-10533a9db202f80c04dfb69f5b37942806d2efdb0a32c6ecfd1df5715e8a46d7-merged.mount: Deactivated successfully.
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:40:27 compute-0 ceph-mon[74676]: pgmap v862: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:40:27 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:40:27 compute-0 podman[265252]: 2026-02-02 11:40:27.606681027 +0000 UTC m=+0.190636763 container remove 64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:40:27 compute-0 systemd[1]: libpod-conmon-64932d5258e20627d6b6503958fdf1a751b2e5b777bf23319e1c1279ec3b1a8a.scope: Deactivated successfully.
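The nervous_payne container above went through create, init, start, attach, died, and remove in about 150 ms: cephadm's one-shot pattern of running a single command in a throwaway container from the pinned ceph image. The "167 167" it printed is consistent with cephadm's uid/gid probe of the ceph user and group, which are 167:167 in these images (an inference from the output; the probe command itself is not logged). A minimal sketch of the same one-shot pattern, using stat as the probe:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # "167 167" for ceph:ceph in this image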
Feb 02 11:40:27 compute-0 podman[265295]: 2026-02-02 11:40:27.769974206 +0000 UTC m=+0.041614443 container create d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:40:27 compute-0 systemd[1]: Started libpod-conmon-d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf.scope.
Feb 02 11:40:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:27 compute-0 podman[265295]: 2026-02-02 11:40:27.845698146 +0000 UTC m=+0.117338413 container init d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:40:27 compute-0 podman[265295]: 2026-02-02 11:40:27.751008563 +0000 UTC m=+0.022648820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:27 compute-0 podman[265295]: 2026-02-02 11:40:27.853133479 +0000 UTC m=+0.124773716 container start d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:40:27 compute-0 podman[265295]: 2026-02-02 11:40:27.857645898 +0000 UTC m=+0.129286135 container attach d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:40:28 compute-0 nova_compute[251290]: 2026-02-02 11:40:28.007 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:28 compute-0 fervent_black[265312]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:40:28 compute-0 fervent_black[265312]: --> All data devices are unavailable
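The fervent_black container is the "ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run launched in the sudo line above; it reports the single LVM data device as unavailable, the expected outcome when the LV is already prepared as an OSD (the lvm list output further down shows ceph.osd_id=1 tags on that LV). One way to check those tags directly, as a sketch:

    import subprocess

    tags = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(tags)  # expect ceph.osd_id=1,... matching the lvm list JSON below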
Feb 02 11:40:28 compute-0 systemd[1]: libpod-d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf.scope: Deactivated successfully.
Feb 02 11:40:28 compute-0 podman[265295]: 2026-02-02 11:40:28.172685743 +0000 UTC m=+0.444325980 container died d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5baca5e583203b3076019bd84a258e3c962968118638e96c2a4b98697108bcf-merged.mount: Deactivated successfully.
Feb 02 11:40:28 compute-0 podman[265295]: 2026-02-02 11:40:28.220185634 +0000 UTC m=+0.491825871 container remove d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:40:28 compute-0 systemd[1]: libpod-conmon-d5bc1141a93619cdf831420af72d84297863fae3dc0b32d7b444ea489699edbf.scope: Deactivated successfully.
Feb 02 11:40:28 compute-0 sudo[265188]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:28 compute-0 sudo[265339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:40:28 compute-0 sudo[265339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:28 compute-0 sudo[265339]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:28 compute-0 sudo[265364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:40:28 compute-0 sudo[265364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
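This second ceph-volume invocation, "lvm list --format json", inventories the OSD volumes already on the host; its JSON output begins further down (the beautiful_vaughan lines). The document is keyed by OSD id, and each entry carries the LV path plus the ceph.* tags. A minimal sketch for parsing it:

    import json

    def osds_from_lvm_list(raw: str) -> dict:
        # maps OSD id -> [(lv_path, osd_fsid), ...]
        data = json.loads(raw)
        return {
            osd_id: [(lv["lv_path"], lv["tags"].get("ceph.osd_fsid"))
                     for lv in lvs]
            for osd_id, lvs in data.items()
        }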
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.77847751 +0000 UTC m=+0.046610626 container create 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:40:28 compute-0 systemd[1]: Started libpod-conmon-4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef.scope.
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.756390997 +0000 UTC m=+0.024524143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.87447293 +0000 UTC m=+0.142606066 container init 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.881404849 +0000 UTC m=+0.149537965 container start 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:40:28 compute-0 frosty_germain[265447]: 167 167
Feb 02 11:40:28 compute-0 systemd[1]: libpod-4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef.scope: Deactivated successfully.
Feb 02 11:40:28 compute-0 conmon[265447]: conmon 4a3cc5aa226322aa9f26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef.scope/container/memory.events
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.888126831 +0000 UTC m=+0.156259967 container attach 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.888598615 +0000 UTC m=+0.156731741 container died 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:40:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Feb 02 11:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5afa02fc3ee1ae4631436a4289c217920157a9f5715fe06404c1212c85e15963-merged.mount: Deactivated successfully.
Feb 02 11:40:28 compute-0 podman[265431]: 2026-02-02 11:40:28.9746299 +0000 UTC m=+0.242763006 container remove 4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:40:29 compute-0 systemd[1]: libpod-conmon-4a3cc5aa226322aa9f26cea996c9e4a11a54243664fb2b1ed0b09181faed15ef.scope: Deactivated successfully.
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.119286715 +0000 UTC m=+0.046990808 container create 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:40:29 compute-0 nova_compute[251290]: 2026-02-02 11:40:29.138 251294 DEBUG nova.network.neutron [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated VIF entry in instance network info cache for port fb58aed2-3c97-4c85-8834-01bd422b3fd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:40:29 compute-0 nova_compute[251290]: 2026-02-02 11:40:29.138 251294 DEBUG nova.network.neutron [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:40:29 compute-0 systemd[1]: Started libpod-conmon-86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33.scope.
Feb 02 11:40:29 compute-0 nova_compute[251290]: 2026-02-02 11:40:29.170 251294 DEBUG oslo_concurrency.lockutils [req-2215cd48-c255-4758-89e4-932d12584009 req-6ce682de-b792-45ef-98fe-13b7c7f3f2bd 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
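The nova_compute sequence that ends here is one complete external-event cycle: Neutron reported network-changed for port fb58aed2-3c97-4c85-8834-01bd422b3fd4, nova took the per-instance refresh_cache lock, rebuilt the instance network info cache (the large network_info blob a few lines up), and released the lock. Once the logging prefix is stripped, that blob is plain JSON; a minimal sketch pulling each port's fixed and floating IPs out of it:

    import json

    def port_ips(network_info_json: str):
        # yields (port_id, fixed_ip, [floating_ips]) per address
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield vif["id"], ip["address"], floats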
Feb 02 11:40:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.097520181 +0000 UTC m=+0.025224304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3c235ce8324183c454d2271c487aac9caaae86189bbb07d44eaf6ab4eeef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3c235ce8324183c454d2271c487aac9caaae86189bbb07d44eaf6ab4eeef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3c235ce8324183c454d2271c487aac9caaae86189bbb07d44eaf6ab4eeef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3c235ce8324183c454d2271c487aac9caaae86189bbb07d44eaf6ab4eeef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.215351427 +0000 UTC m=+0.143055540 container init 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.221669598 +0000 UTC m=+0.149373691 container start 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.226915368 +0000 UTC m=+0.154619461 container attach 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:40:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
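
The anonymous "HEAD / HTTP/1.0" requests answered above arrive on a steady ~2 s cadence from 192.168.122.100 and 192.168.122.102, which is consistent with load-balancer health checks against radosgw (an inference; the log records only the requests). A minimal sketch of a functionally equivalent probe (http.client speaks HTTP/1.1, and the RGW frontend port is an assumption, since the log does not name it):

    # Issue the same anonymous probe radosgw is answering in the beast
    # access lines above. Host and port 8080 are assumptions.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, as logged
    conn.close()
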
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]: {
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:     "1": [
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:         {
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "devices": [
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "/dev/loop3"
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             ],
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "lv_name": "ceph_lv0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "lv_size": "21470642176",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "name": "ceph_lv0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "tags": {
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.cluster_name": "ceph",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.crush_device_class": "",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.encrypted": "0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.osd_id": "1",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.type": "block",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.vdo": "0",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:                 "ceph.with_tpm": "0"
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             },
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "type": "block",
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:             "vg_name": "ceph_vg0"
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:         }
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]:     ]
Feb 02 11:40:29 compute-0 beautiful_vaughan[265489]: }
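
The JSON emitted by the beautiful_vaughan container above is ceph-volume's LVM inventory: a map of OSD id to the logical volumes backing it (here OSD 1 on /dev/ceph_vg0/ceph_lv0 over /dev/loop3). A minimal sketch of reducing that output to an OSD-to-device map, assuming the same JSON shape; the cephadm wrapper invocation is an assumption modeled on the cephadm/ceph-volume command visible later in this log:

    # Run the listing the container above produced and reduce it to
    # {osd_id: [(lv_path, devices), ...]}.
    import json
    import subprocess

    def lvm_osd_map():
        out = subprocess.check_output(
            ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"]
        )
        return {
            osd_id: [(lv["lv_path"], lv["devices"]) for lv in lvs]
            for osd_id, lvs in json.loads(out).items()
        }

    # For the output above: {'1': [('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])]}
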
Feb 02 11:40:29 compute-0 systemd[1]: libpod-86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33.scope: Deactivated successfully.
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.525875644 +0000 UTC m=+0.453579737 container died 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-81d3c235ce8324183c454d2271c487aac9caaae86189bbb07d44eaf6ab4eeef0-merged.mount: Deactivated successfully.
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:40:29
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.control', '.nfs', 'vms', 'cephfs.cephfs.data', 'images']
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
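
The balancer pass above ran in upmap mode with a 0.05 max-misplaced budget, evaluated the 12 listed pools, and prepared 0 of an allowed 10 upmap changes, i.e. the cluster is already balanced. The same state can be read back with the CLI; a sketch, assuming an admin keyring is available:

    # Query the balancer whose pass is logged above. 'ceph balancer status'
    # reports the mode and whether automatic optimization is active.
    import json
    import subprocess

    status = json.loads(
        subprocess.check_output(
            ["ceph", "balancer", "status", "--format", "json"]
        )
    )
    print(status["active"], status["mode"])  # expect: True upmap
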
Feb 02 11:40:29 compute-0 podman[265472]: 2026-02-02 11:40:29.571300155 +0000 UTC m=+0.499004248 container remove 86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Feb 02 11:40:29 compute-0 systemd[1]: libpod-conmon-86f277d59a80280f6d800123f28c998d7792975f3fe4244ec9eb555f180edd33.scope: Deactivated successfully.
Feb 02 11:40:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:40:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:29 compute-0 sudo[265364]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:29 compute-0 sudo[265511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:40:29 compute-0 sudo[265511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:29 compute-0 sudo[265511]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:29 compute-0 sudo[265536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:40:29 compute-0 sudo[265536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015208353682359835 of space, bias 1.0, pg target 0.45625061047079507 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:40:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
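
Each "pg target" printed above is consistent with usage_fraction * bias * 300: for '.mgr', 7.1857e-06 * 1.0 * 300 = 0.0021557, and for 'cephfs.cephfs.meta', 5.0873e-07 * 4.0 * 300 = 0.00061047. The factor 300 plausibly decomposes as mon_target_pg_per_osd (default 100) times the three OSDs behind the 60 GiB shown in the pgmap lines; that decomposition is an inference, not something the autoscaler prints. A quick check of the arithmetic against the logged figures:

    # Reproduce the pg_autoscaler "pg target" figures logged above.
    # PG_BUDGET = 300 is inferred (mon_target_pg_per_osd=100 * 3 OSDs).
    PG_BUDGET = 300

    def pg_target(usage_fraction, bias, budget=PG_BUDGET):
        return usage_fraction * bias * budget

    assert abs(pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-12
    assert abs(pg_target(5.087256625643029e-07, 4.0)
               - 0.0006104707950771635) < 1e-12
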
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.147651318 +0000 UTC m=+0.047697437 container create b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:40:30 compute-0 systemd[1]: Started libpod-conmon-b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760.scope.
Feb 02 11:40:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:40:30 compute-0 ceph-mon[74676]: pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Feb 02 11:40:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.121287293 +0000 UTC m=+0.021333512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.225592241 +0000 UTC m=+0.125638390 container init b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.232127589 +0000 UTC m=+0.132173708 container start b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.236868225 +0000 UTC m=+0.136914364 container attach b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:40:30 compute-0 hardcore_davinci[265619]: 167 167
Feb 02 11:40:30 compute-0 systemd[1]: libpod-b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760.scope: Deactivated successfully.
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.239129669 +0000 UTC m=+0.139175788 container died b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7a98d8580f3a53f47aff9197d5b550cb035213ca24e5b8fa2f78e8638c8077-merged.mount: Deactivated successfully.
Feb 02 11:40:30 compute-0 podman[265603]: 2026-02-02 11:40:30.281698689 +0000 UTC m=+0.181744808 container remove b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:40:30 compute-0 systemd[1]: libpod-conmon-b47110c82954aa034f866af1c23a876b70d4d5f646ca37962bbe77388a43e760.scope: Deactivated successfully.
Feb 02 11:40:30 compute-0 podman[265643]: 2026-02-02 11:40:30.431323526 +0000 UTC m=+0.039262406 container create 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:40:30 compute-0 systemd[1]: Started libpod-conmon-61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e.scope.
Feb 02 11:40:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a06a9b48c9d2255ecdf5670458e3414edd74591320237a4de33b958f2b4d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a06a9b48c9d2255ecdf5670458e3414edd74591320237a4de33b958f2b4d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a06a9b48c9d2255ecdf5670458e3414edd74591320237a4de33b958f2b4d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a06a9b48c9d2255ecdf5670458e3414edd74591320237a4de33b958f2b4d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:40:30 compute-0 podman[265643]: 2026-02-02 11:40:30.414960207 +0000 UTC m=+0.022899107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:40:30 compute-0 podman[265643]: 2026-02-02 11:40:30.516714362 +0000 UTC m=+0.124653272 container init 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:40:30 compute-0 podman[265643]: 2026-02-02 11:40:30.524716712 +0000 UTC m=+0.132655592 container start 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:40:30 compute-0 podman[265643]: 2026-02-02 11:40:30.529130558 +0000 UTC m=+0.137069458 container attach 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:40:30 compute-0 nova_compute[251290]: 2026-02-02 11:40:30.605 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Feb 02 11:40:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:31 compute-0 lvm[265735]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:40:31 compute-0 lvm[265735]: VG ceph_vg0 finished
Feb 02 11:40:31 compute-0 gifted_chaum[265659]: {}
Feb 02 11:40:31 compute-0 systemd[1]: libpod-61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e.scope: Deactivated successfully.
Feb 02 11:40:31 compute-0 podman[265643]: 2026-02-02 11:40:31.297906645 +0000 UTC m=+0.905845555 container died 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 11:40:31 compute-0 systemd[1]: libpod-61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e.scope: Consumed 1.128s CPU time.
Feb 02 11:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3a06a9b48c9d2255ecdf5670458e3414edd74591320237a4de33b958f2b4d7-merged.mount: Deactivated successfully.
Feb 02 11:40:31 compute-0 podman[265643]: 2026-02-02 11:40:31.360526008 +0000 UTC m=+0.968464898 container remove 61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaum, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:40:31 compute-0 systemd[1]: libpod-conmon-61a8e07293976a8110d3be064dd4841dfb8b83e5b0da73097c3bee59e9b4b97e.scope: Deactivated successfully.
Feb 02 11:40:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:31.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:31 compute-0 sudo[265536]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:40:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:40:31 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:31.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:31 compute-0 sudo[265751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:40:31 compute-0 sudo[265751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:31 compute-0 sudo[265751]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:32 compute-0 nova_compute[251290]: 2026-02-02 11:40:32.042 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:32 compute-0 ceph-mon[74676]: pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Feb 02 11:40:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/742219464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:32 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:40:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 02 11:40:33 compute-0 nova_compute[251290]: 2026-02-02 11:40:33.010 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3011736181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/869080524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:33.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:34 compute-0 nova_compute[251290]: 2026-02-02 11:40:34.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:34 compute-0 nova_compute[251290]: 2026-02-02 11:40:34.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:34 compute-0 ceph-mon[74676]: pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 02 11:40:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/462073105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.037 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.037 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.037 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.038 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.038 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:40:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/409970858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:35.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:40:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942378106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.515 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.579 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.580 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.609 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.753 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.754 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4278MB free_disk=59.89704895019531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.754 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.754 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.840 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.841 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.841 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.864 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.881 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.882 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
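
The inventory nova pushes above is what Placement turns into schedulable capacity via (total - reserved) * allocation_ratio, which is how 8 physical VCPUs become 32 schedulable ones at a 4.0 ratio. Applied to the logged figures:

    # Effective Placement capacity for provider
    # 92919e7b-7846-4645-9401-9fd55bbbf435, from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
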
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.896 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.928 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:40:35 compute-0 nova_compute[251290]: 2026-02-02 11:40:35.966 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.112 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-fb58aed2-3c97-4c85-8834-01bd422b3fd4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.113 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-fb58aed2-3c97-4c85-8834-01bd422b3fd4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.131 251294 DEBUG nova.objects.instance [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'flavor' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.157 251294 DEBUG nova.virt.libvirt.vif [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.158 251294 DEBUG nova.network.os_vif_util [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.159 251294 DEBUG nova.network.os_vif_util [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
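[editor's note] The two DEBUG entries above show nova converting its internal VIF dict into an os-vif VIFOpenVSwitch object before unplugging the port. A minimal, illustrative sketch of that mapping follows; the field names are taken from the logged repr, but the dataclass is a stand-in, not the real os_vif library class (the actual converter is nova_to_osvif_vif in nova/network/os_vif_util.py):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Subset of the fields visible in the "Converted object" log line.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        has_traffic_filtering: bool
        preserve_on_delete: bool

    def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
        """Map the nova-side VIF dict (as logged above) to an os-vif-style object."""
        details = vif.get("details", {})
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],                      # e.g. tapfb58aed2-3c
            active=vif.get("active", False),
            has_traffic_filtering=details.get("port_filter", False),
            preserve_on_delete=vif.get("preserve_on_delete", False),
        )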
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.162 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.165 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.169 251294 DEBUG nova.virt.libvirt.driver [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Attempting to detach device tapfb58aed2-3c from instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.170 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] detach device xml: <interface type="ethernet">
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:4a:43:b5"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <target dev="tapfb58aed2-3c"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </interface>
Feb 02 11:40:36 compute-0 nova_compute[251290]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.177 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.181 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> not found in domain: <domain type='kvm' id='3'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <name>instance-00000006</name>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <uuid>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</uuid>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:39:52</nova:creationTime>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:port uuid="fb58aed2-3c97-4c85-8834-01bd422b3fd4">
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <system>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='serial'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='uuid'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </system>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <os>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </os>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <features>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </features>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk' index='2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config' index='1'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:83:bd:9e'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='tape235e7e6-e8'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:4a:43:b5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='tapfb58aed2-3c'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='net1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </target>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </console>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <video>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </video>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c582,c1002</label>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c582,c1002</imagelabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </domain>
Feb 02 11:40:36 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.183 251294 INFO nova.virt.libvirt.driver [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully detached device tapfb58aed2-3c from instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e from the persistent domain config.
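[editor's note] Detaching happens twice: first from the persistent domain definition (confirmed in the INFO line above), then from the live domain in the entries that follow. With the libvirt Python bindings the two cases differ only in the flags passed to detachDeviceFlags; a minimal sketch, assuming a reachable libvirtd:

    import libvirt

    def detach(domain_name: str, device_xml: str, live: bool) -> None:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByName(domain_name)   # e.g. "instance-00000006"
            # AFFECT_LIVE changes the running guest; AFFECT_CONFIG edits the
            # persistent definition, as in the detach logged just above.
            flags = (libvirt.VIR_DOMAIN_AFFECT_LIVE if live
                     else libvirt.VIR_DOMAIN_AFFECT_CONFIG)
            dom.detachDeviceFlags(device_xml, flags)
        finally:
            conn.close()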
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.183 251294 DEBUG nova.virt.libvirt.driver [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] (1/8): Attempting to detach device tapfb58aed2-3c with device alias net1 from instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.184 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] detach device xml: <interface type="ethernet">
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <mac address="fa:16:3e:4a:43:b5"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <model type="virtio"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <mtu size="1442"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <target dev="tapfb58aed2-3c"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </interface>
Feb 02 11:40:36 compute-0 nova_compute[251290]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 11:40:36 compute-0 kernel: tapfb58aed2-3c (unregistering): left promiscuous mode
Feb 02 11:40:36 compute-0 NetworkManager[49067]: <info>  [1770032436.2804] device (tapfb58aed2-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.295 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 ovn_controller[154901]: 2026-02-02T11:40:36Z|00070|binding|INFO|Releasing lport fb58aed2-3c97-4c85-8834-01bd422b3fd4 from this chassis (sb_readonly=0)
Feb 02 11:40:36 compute-0 ovn_controller[154901]: 2026-02-02T11:40:36Z|00071|binding|INFO|Setting lport fb58aed2-3c97-4c85-8834-01bd422b3fd4 down in Southbound
Feb 02 11:40:36 compute-0 ovn_controller[154901]: 2026-02-02T11:40:36Z|00072|binding|INFO|Removing iface tapfb58aed2-3c ovn-installed in OVS
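[editor's note] Once the tap device disappears from br-int, ovn-controller releases the logical port and marks it down in the Southbound DB, as the three lines above show. The binding state can be checked from a chassis node; an illustrative query, assuming ovn-sbctl is installed there:

    import subprocess

    def port_binding(logical_port: str) -> str:
        # Query the Southbound DB for the binding released in the log above.
        out = subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True)
        return out.stdout

    # e.g. port_binding("fb58aed2-3c97-4c85-8834-01bd422b3fd4")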
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.302 251294 DEBUG nova.virt.libvirt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Received event <DeviceRemovedEvent: 1770032436.3023453, ccf853f9-d90e-46b8-85a2-b47f8fc8585e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
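[editor's note] The DeviceRemovedEvent dispatched above is what unblocks the retry loop. A sketch of how such a callback can be registered with the libvirt Python bindings; nova runs its own native event loop, whereas this minimal version uses libvirt's default implementation:

    import libvirt

    def on_device_removed(conn, dom, dev_alias, opaque):
        # dev_alias is the device alias, e.g. "net1" as in the log line above.
        print(f"device removed from {dom.name()}: {dev_alias}")

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")
    conn.domainEventRegisterAny(
        None,  # None = receive events for all domains
        libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed,
        None,
    )
    # A real service would now loop on libvirt.virEventRunDefaultImpl().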
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.303 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:43:b5 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed74a638-b21d-4bea-a72e-473ea537cd95', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed41d58-bb56-40d9-abe4-e24417089c0a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=fb58aed2-3c97-4c85-8834-01bd422b3fd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.305 165304 INFO neutron.agent.ovn.metadata.agent [-] Port fb58aed2-3c97-4c85-8834-01bd422b3fd4 in datapath ed74a638-b21d-4bea-a72e-473ea537cd95 unbound from our chassis
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.306 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed74a638-b21d-4bea-a72e-473ea537cd95, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.308 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[76ee116d-3d3f-486f-8cce-e8accdafab05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.308 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95 namespace which is not needed anymore
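[editor's note] When the last VIF on a network leaves the chassis, the metadata agent tears down that network's ovnmeta-<network-uuid> namespace, as logged above. Internally it goes through privsep; the stand-in below shells out to ip(8) instead, which achieves the same end state:

    import subprocess

    def cleanup_metadata_namespace(network_uuid: str) -> None:
        ns = f"ovnmeta-{network_uuid}"
        subprocess.run(["ip", "netns", "delete", ns], check=True)

    # e.g. cleanup_metadata_namespace("ed74a638-b21d-4bea-a72e-473ea537cd95")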
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.310 251294 DEBUG nova.virt.libvirt.driver [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Start waiting for the detach event from libvirt for device tapfb58aed2-3c with device alias net1 for instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.311 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.312 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.321 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> not found in domain: <domain type='kvm' id='3'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <name>instance-00000006</name>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <uuid>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</uuid>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:39:52</nova:creationTime>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:port uuid="fb58aed2-3c97-4c85-8834-01bd422b3fd4">
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <system>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='serial'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='uuid'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </system>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <os>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </os>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <features>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </features>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk' index='2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config' index='1'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:83:bd:9e'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target dev='tape235e7e6-e8'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       </target>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </console>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <video>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </video>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c582,c1002</label>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c582,c1002</imagelabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </domain>
Feb 02 11:40:36 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.324 251294 INFO nova.virt.libvirt.driver [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully detached device tapfb58aed2-3c from instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e from the live domain config.
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.325 251294 DEBUG nova.virt.libvirt.vif [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.325 251294 DEBUG nova.network.os_vif_util [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.326 251294 DEBUG nova.network.os_vif_util [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.326 251294 DEBUG os_vif [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.331 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.331 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb58aed2-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.333 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.338 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:40:36 compute-0 ceph-mon[74676]: pgmap v866: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 02 11:40:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1942378106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.342 251294 INFO os_vif [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c')
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.343 251294 DEBUG nova.virt.libvirt.guest [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:40:36</nova:creationTime>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:36 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:36 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:36 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:36 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:36 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 02 11:40:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:40:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131418535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:36 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [NOTICE]   (264947) : haproxy version is 2.8.14-c23fe91
Feb 02 11:40:36 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [NOTICE]   (264947) : path to executable is /usr/sbin/haproxy
Feb 02 11:40:36 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [WARNING]  (264947) : Exiting Master process...
Feb 02 11:40:36 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [ALERT]    (264947) : Current worker (264949) exited with code 143 (Terminated)
Feb 02 11:40:36 compute-0 neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95[264943]: [WARNING]  (264947) : All workers exited. Exiting... (0)
Feb 02 11:40:36 compute-0 systemd[1]: libpod-e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10.scope: Deactivated successfully.
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.453 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:40:36 compute-0 podman[265846]: 2026-02-02 11:40:36.456880675 +0000 UTC m=+0.053804802 container died e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.461 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.477 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.480 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.481 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10-userdata-shm.mount: Deactivated successfully.
Feb 02 11:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-565c07b3561e09f46255e9631bc3c45c8a977aeee43f3fe4690ccab23c9a582c-merged.mount: Deactivated successfully.
Feb 02 11:40:36 compute-0 podman[265846]: 2026-02-02 11:40:36.507846825 +0000 UTC m=+0.104770932 container cleanup e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:40:36 compute-0 systemd[1]: libpod-conmon-e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10.scope: Deactivated successfully.
Feb 02 11:40:36 compute-0 podman[265877]: 2026-02-02 11:40:36.569270515 +0000 UTC m=+0.042703264 container remove e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.574 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[135f7fee-cd0f-4b6d-8709-513c43c7d839]: (4, ('Mon Feb  2 11:40:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95 (e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10)\ne88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10\nMon Feb  2 11:40:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95 (e88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10)\ne88c59136c86b32fdc2641c250fde1e56371911bc114172bd3b5ca9940ed0f10\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.576 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb0fcec-a41e-4284-bcf7-5ebb72fd0831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.578 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped74a638-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.580 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 kernel: taped74a638-b0: left promiscuous mode
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.582 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.585 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[fef012ed-bc43-46e4-b3e9-cf8b2753f35c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.587 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.606 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[922f1e34-1b79-4dd2-bc16-bb4d1b1f0af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.608 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[cd34ce2c-b09e-4127-9a99-aee915773f00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.620 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[4f263470-655b-4212-98c0-011adc4c1ea6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404723, 'reachable_time': 39882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265892, 'error': None, 'target': 'ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.623 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed74a638-b21d-4bea-a72e-473ea537cd95 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:40:36 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:36.623 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae7ce1-d3b2-4ae3-921b-fbb168ae2633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:36 compute-0 systemd[1]: run-netns-ovnmeta\x2ded74a638\x2db21d\x2d4bea\x2da72e\x2d473ea537cd95.mount: Deactivated successfully.
Feb 02 11:40:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.765 251294 DEBUG nova.compute.manager [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-unplugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.766 251294 DEBUG oslo_concurrency.lockutils [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.766 251294 DEBUG oslo_concurrency.lockutils [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.767 251294 DEBUG oslo_concurrency.lockutils [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.767 251294 DEBUG nova.compute.manager [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-unplugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:40:36 compute-0 nova_compute[251290]: 2026-02-02 11:40:36.767 251294 WARNING nova.compute.manager [req-4f4e2310-597a-4ebf-b463-f9dcdadbae9b req-f264b49a-c30a-4635-94ad-a5d51164ea0f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-unplugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 for instance with vm_state active and task_state None.
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Feb 02 11:40:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 27 KiB/s wr, 32 op/s
Feb 02 11:40:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:36] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:40:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:36] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.115 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.116 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.116 251294 DEBUG nova.network.neutron [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:40:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:37.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1131418535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.476 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.476 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.476 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:37 compute-0 nova_compute[251290]: 2026-02-02 11:40:37.477 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:40:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:38 compute-0 ceph-mon[74676]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 27 KiB/s wr, 32 op/s
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.849 251294 DEBUG nova.compute.manager [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.850 251294 DEBUG oslo_concurrency.lockutils [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.850 251294 DEBUG oslo_concurrency.lockutils [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.850 251294 DEBUG oslo_concurrency.lockutils [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.850 251294 DEBUG nova.compute.manager [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.851 251294 WARNING nova.compute.manager [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-plugged-fb58aed2-3c97-4c85-8834-01bd422b3fd4 for instance with vm_state active and task_state None.
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.851 251294 DEBUG nova.compute.manager [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-deleted-fb58aed2-3c97-4c85-8834-01bd422b3fd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.851 251294 INFO nova.compute.manager [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Neutron deleted interface fb58aed2-3c97-4c85-8834-01bd422b3fd4; detaching it from the instance and deleting it from the info cache
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.851 251294 DEBUG nova.network.neutron [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:40:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.883 251294 DEBUG nova.objects.instance [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lazy-loading 'system_metadata' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.907 251294 DEBUG nova.objects.instance [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lazy-loading 'flavor' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.933 251294 DEBUG nova.virt.libvirt.vif [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.934 251294 DEBUG nova.network.os_vif_util [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.935 251294 DEBUG nova.network.os_vif_util [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:40:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 11 KiB/s wr, 29 op/s
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.940 251294 DEBUG nova.virt.libvirt.guest [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.944 251294 DEBUG nova.virt.libvirt.guest [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface>not found in domain: <domain type='kvm' id='3'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <name>instance-00000006</name>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <uuid>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</uuid>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:40:36</nova:creationTime>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <system>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='serial'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='uuid'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </system>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <os>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </os>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <features>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </features>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk' index='2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config' index='1'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:83:bd:9e'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='tape235e7e6-e8'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </target>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </console>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <video>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </video>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c582,c1002</label>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c582,c1002</imagelabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]: </domain>
Feb 02 11:40:38 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.946 251294 DEBUG nova.virt.libvirt.guest [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.950 251294 DEBUG nova.virt.libvirt.guest [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:43:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb58aed2-3c"/></interface>not found in domain: <domain type='kvm' id='3'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <name>instance-00000006</name>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <uuid>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</uuid>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:40:36</nova:creationTime>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <memory unit='KiB'>131072</memory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <vcpu placement='static'>1</vcpu>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <resource>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <partition>/machine</partition>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </resource>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <sysinfo type='smbios'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <system>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='manufacturer'>RDO</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='product'>OpenStack Compute</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='serial'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='uuid'>ccf853f9-d90e-46b8-85a2-b47f8fc8585e</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <entry name='family'>Virtual Machine</entry>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </system>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <os>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <boot dev='hd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <smbios mode='sysinfo'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </os>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <features>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <vmcoreinfo state='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </features>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <cpu mode='custom' match='exact' check='full'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <vendor>AMD</vendor>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='x2apic'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc-deadline'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='hypervisor'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='tsc_adjust'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='spec-ctrl'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='stibp'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='cmp_legacy'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='overflow-recov'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='succor'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='ibrs'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='amd-ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='virt-ssbd'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='lbrv'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='tsc-scale'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='vmcb-clean'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='flushbyasid'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='pause-filter'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='pfthreshold'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='xsaves'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='svm'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='require' name='topoext'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='npt'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <feature policy='disable' name='nrip-save'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <clock offset='utc'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='pit' tickpolicy='delay'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <timer name='hpet' present='no'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_poweroff>destroy</on_poweroff>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_reboot>restart</on_reboot>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <on_crash>destroy</on_crash>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <disk type='network' device='disk'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk' index='2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='vda' bus='virtio'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='virtio-disk0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <disk type='network' device='cdrom'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='qemu' type='raw' cache='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <auth username='openstack'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <secret type='ceph' uuid='1d33f80b-d6ca-501c-bac7-184379b89279'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source protocol='rbd' name='vms/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_disk.config' index='1'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.100' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.102' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <host name='192.168.122.101' port='6789'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </source>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='sda' bus='sata'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <readonly/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='sata0-0-0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='0' model='pcie-root'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pcie.0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='1' port='0x10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='2' port='0x11'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='3' port='0x12'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='4' port='0x13'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='5' port='0x14'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='6' port='0x15'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='7' port='0x16'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='8' port='0x17'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.8'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='9' port='0x18'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.9'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='10' port='0x19'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='11' port='0x1a'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.11'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='12' port='0x1b'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.12'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='13' port='0x1c'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.13'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='14' port='0x1d'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.14'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='15' port='0x1e'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.15'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='16' port='0x1f'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.16'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='17' port='0x20'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.17'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='18' port='0x21'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.18'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='19' port='0x22'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.19'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='20' port='0x23'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.20'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='21' port='0x24'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.21'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='22' port='0x25'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.22'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='23' port='0x26'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.23'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='24' port='0x27'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.24'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-root-port'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target chassis='25' port='0x28'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.25'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model name='pcie-pci-bridge'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='pci.26'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='usb'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <controller type='sata' index='0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='ide'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </controller>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <interface type='ethernet'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <mac address='fa:16:3e:83:bd:9e'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target dev='tape235e7e6-e8'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model type='virtio'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <driver name='vhost' rx_queue_size='512'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <mtu size='1442'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='net0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <serial type='pty'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target type='isa-serial' port='0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:         <model name='isa-serial'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       </target>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <console type='pty' tty='/dev/pts/0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <source path='/dev/pts/0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <log file='/var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e/console.log' append='off'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <target type='serial' port='0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='serial0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </console>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='tablet' bus='usb'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='usb' bus='0' port='1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='mouse' bus='ps2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input1'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <input type='keyboard' bus='ps2'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='input2'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </input>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <listen type='address' address='::0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </graphics>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <audio id='1' type='none'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <video>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <model type='virtio' heads='1' primary='yes'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='video0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </video>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <watchdog model='itco' action='reset'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='watchdog0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </watchdog>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <memballoon model='virtio'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <stats period='10'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='balloon0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <rng model='virtio'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <backend model='random'>/dev/urandom</backend>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <alias name='rng0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <label>system_u:system_r:svirt_t:s0:c582,c1002</label>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c582,c1002</imagelabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <label>+107:+107</label>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <imagelabel>+107:+107</imagelabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </seclabel>
Feb 02 11:40:38 compute-0 nova_compute[251290]: </domain>
Feb 02 11:40:38 compute-0 nova_compute[251290]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.950 251294 WARNING nova.virt.libvirt.driver [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Detaching interface fa:16:3e:4a:43:b5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfb58aed2-3c' not found.
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.951 251294 DEBUG nova.virt.libvirt.vif [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.951 251294 DEBUG nova.network.os_vif_util [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converting VIF {"id": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "address": "fa:16:3e:4a:43:b5", "network": {"id": "ed74a638-b21d-4bea-a72e-473ea537cd95", "bridge": "br-int", "label": "tempest-network-smoke--155869764", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb58aed2-3c", "ovs_interfaceid": "fb58aed2-3c97-4c85-8834-01bd422b3fd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.952 251294 DEBUG nova.network.os_vif_util [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.952 251294 DEBUG os_vif [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.954 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.954 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb58aed2-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.954 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.957 251294 INFO os_vif [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:43:b5,bridge_name='br-int',has_traffic_filtering=True,id=fb58aed2-3c97-4c85-8834-01bd422b3fd4,network=Network(ed74a638-b21d-4bea-a72e-473ea537cd95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb58aed2-3c')
Feb 02 11:40:38 compute-0 nova_compute[251290]: 2026-02-02 11:40:38.958 251294 DEBUG nova.virt.libvirt.guest [req-43e232f0-d1e2-431e-8e5c-4f81cc3d72a3 req-4e3468da-47e6-4ac9-a63d-8e63be5d3186 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:name>tempest-TestNetworkBasicOps-server-1247938665</nova:name>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:creationTime>2026-02-02 11:40:38</nova:creationTime>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:flavor name="m1.nano">
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:memory>128</nova:memory>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:disk>1</nova:disk>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:swap>0</nova:swap>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:vcpus>1</nova:vcpus>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:flavor>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:owner>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   <nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     <nova:port uuid="e235e7e6-e897-4b5c-80c9-036612ca0aa0">
Feb 02 11:40:38 compute-0 nova_compute[251290]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 11:40:38 compute-0 nova_compute[251290]:     </nova:port>
Feb 02 11:40:38 compute-0 nova_compute[251290]:   </nova:ports>
Feb 02 11:40:38 compute-0 nova_compute[251290]: </nova:instance>
Feb 02 11:40:38 compute-0 nova_compute[251290]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 02 11:40:39 compute-0 ovn_controller[154901]: 2026-02-02T11:40:39Z|00073|binding|INFO|Releasing lport 1b19eaf6-869f-4563-a74e-b4aff65ccdab from this chassis (sb_readonly=0)
Feb 02 11:40:39 compute-0 nova_compute[251290]: 2026-02-02 11:40:39.071 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:39 compute-0 nova_compute[251290]: 2026-02-02 11:40:39.114 251294 INFO nova.network.neutron [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Port fb58aed2-3c97-4c85-8834-01bd422b3fd4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 02 11:40:39 compute-0 nova_compute[251290]: 2026-02-02 11:40:39.115 251294 DEBUG nova.network.neutron [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:40:39 compute-0 nova_compute[251290]: 2026-02-02 11:40:39.153 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:40:39 compute-0 nova_compute[251290]: 2026-02-02 11:40:39.194 251294 DEBUG oslo_concurrency.lockutils [None req-85b7b42e-ee4c-405d-8feb-0f50cae26cf1 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "interface-ccf853f9-d90e-46b8-85a2-b47f8fc8585e-fb58aed2-3c97-4c85-8834-01bd422b3fd4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:39.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:39.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.007 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.007 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.008 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.008 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.008 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.009 251294 INFO nova.compute.manager [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Terminating instance
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.010 251294 DEBUG nova.compute.manager [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.036 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.036 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:40:40 compute-0 kernel: tape235e7e6-e8 (unregistering): left promiscuous mode
Feb 02 11:40:40 compute-0 NetworkManager[49067]: <info>  [1770032440.0784] device (tape235e7e6-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:40:40 compute-0 ovn_controller[154901]: 2026-02-02T11:40:40Z|00074|binding|INFO|Releasing lport e235e7e6-e897-4b5c-80c9-036612ca0aa0 from this chassis (sb_readonly=0)
Feb 02 11:40:40 compute-0 ovn_controller[154901]: 2026-02-02T11:40:40Z|00075|binding|INFO|Setting lport e235e7e6-e897-4b5c-80c9-036612ca0aa0 down in Southbound
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.084 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 ovn_controller[154901]: 2026-02-02T11:40:40Z|00076|binding|INFO|Removing iface tape235e7e6-e8 ovn-installed in OVS
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.088 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.094 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:bd:9e 10.100.0.4'], port_security=['fa:16:3e:83:bd:9e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ccf853f9-d90e-46b8-85a2-b47f8fc8585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06a83769-0f4f-4012-9307-4d0e81e87120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e5471f9-85cc-4467-a88c-e46226a3955b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77029122-a2df-4506-bb27-dd42ac356ba6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=e235e7e6-e897-4b5c-80c9-036612ca0aa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.095 165304 INFO neutron.agent.ovn.metadata.agent [-] Port e235e7e6-e897-4b5c-80c9-036612ca0aa0 in datapath 06a83769-0f4f-4012-9307-4d0e81e87120 unbound from our chassis
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.097 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06a83769-0f4f-4012-9307-4d0e81e87120, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.097 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.097 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a162aaf6-4ed3-4924-af0e-9ccf06e1f509]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.098 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120 namespace which is not needed anymore
Feb 02 11:40:40 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 02 11:40:40 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 15.927s CPU time.
Feb 02 11:40:40 compute-0 systemd-machined[218018]: Machine qemu-3-instance-00000006 terminated.
Feb 02 11:40:40 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [NOTICE]   (264070) : haproxy version is 2.8.14-c23fe91
Feb 02 11:40:40 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [NOTICE]   (264070) : path to executable is /usr/sbin/haproxy
Feb 02 11:40:40 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [WARNING]  (264070) : Exiting Master process...
Feb 02 11:40:40 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [ALERT]    (264070) : Current worker (264072) exited with code 143 (Terminated)
Feb 02 11:40:40 compute-0 neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120[264066]: [WARNING]  (264070) : All workers exited. Exiting... (0)
Feb 02 11:40:40 compute-0 systemd[1]: libpod-e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b.scope: Deactivated successfully.
Feb 02 11:40:40 compute-0 podman[265921]: 2026-02-02 11:40:40.216192154 +0000 UTC m=+0.040353887 container died e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b-userdata-shm.mount: Deactivated successfully.
Feb 02 11:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83a4aa2ba93104cdbd87356a948bf9c2e1fa99630ac9195c54764439212a589-merged.mount: Deactivated successfully.
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.248 251294 INFO nova.virt.libvirt.driver [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance destroyed successfully.
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.249 251294 DEBUG nova.objects.instance [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'resources' on Instance uuid ccf853f9-d90e-46b8-85a2-b47f8fc8585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:40:40 compute-0 podman[265921]: 2026-02-02 11:40:40.254711608 +0000 UTC m=+0.078873341 container cleanup e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.265 251294 DEBUG nova.virt.libvirt.vif [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:39:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1247938665',display_name='tempest-TestNetworkBasicOps-server-1247938665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1247938665',id=6,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqof0F9JyUHVT5hMQ79Cok6KP6MdurBGSPOHnHzHbYwvsbknGW+nDBswDzr2outcMRHfbDibxWIVbtW1Oyvmxl+ktSqoqRtSqEKI2u6qUhtldlE4mRfShljqjZtZT75TA==',key_name='tempest-TestNetworkBasicOps-1477889364',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-50sxw0dy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:39:21Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=ccf853f9-d90e-46b8-85a2-b47f8fc8585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.265 251294 DEBUG nova.network.os_vif_util [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.266 251294 DEBUG nova.network.os_vif_util [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.268 251294 DEBUG os_vif [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.272 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.273 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape235e7e6-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.274 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.275 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.276 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.278 251294 INFO os_vif [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:bd:9e,bridge_name='br-int',has_traffic_filtering=True,id=e235e7e6-e897-4b5c-80c9-036612ca0aa0,network=Network(06a83769-0f4f-4012-9307-4d0e81e87120),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape235e7e6-e8')
Feb 02 11:40:40 compute-0 systemd[1]: libpod-conmon-e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b.scope: Deactivated successfully.
Feb 02 11:40:40 compute-0 podman[265962]: 2026-02-02 11:40:40.323693064 +0000 UTC m=+0.048183351 container remove e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.329 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[d07d3d8e-9d0b-4e90-b169-5d6fc49e9475]: (4, ('Mon Feb  2 11:40:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120 (e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b)\ne293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b\nMon Feb  2 11:40:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120 (e293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b)\ne293dcc89623468ac1214bdca933ff98f220d4d05e32b89f9e4e16174a570c7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.331 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[eb80c7c8-e6ea-4899-98ef-745884f12f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.333 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06a83769-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.335 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 kernel: tap06a83769-00: left promiscuous mode
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.341 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.344 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[923dd01a-03dd-46fd-b855-25938c93a406]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.360 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0f7685-b748-4f99-a04c-637ba99d1571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.362 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe57c4c-2ff2-4e29-a2ee-944612bcb7bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.378 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a2ff07-92f8-4d34-87c5-68afc0db1a31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401497, 'reachable_time': 24251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265994, 'error': None, 'target': 'ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
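The privsep reply above is a pyroute2-style RTM_NEWLINK message: interface attributes arrive as a nested 'attrs' list of [name, value] pairs rather than a dict, since netlink allows the same attribute name to repeat. A minimal Python sketch of reading such a message, using only a trimmed excerpt of the logged dict (the helper name get_attr is illustrative, not neutron code):

# Trimmed excerpt of the RTM_NEWLINK reply logged above.
msg = {
    'index': 1,
    'event': 'RTM_NEWLINK',
    'state': 'up',
    'attrs': [
        ['IFLA_IFNAME', 'lo'],
        ['IFLA_MTU', 65536],
        ['IFLA_OPERSTATE', 'UNKNOWN'],
        ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1}],
    ],
}

def get_attr(message, name, default=None):
    # Attributes are [name, value] pairs; return the first match.
    for attr_name, value in message.get('attrs', []):
        if attr_name == name:
            return value
    return default

print(get_attr(msg, 'IFLA_IFNAME'))   # lo
print(get_attr(msg, 'IFLA_MTU'))      # 65536
print(get_attr(msg, 'IFLA_STATS64'))  # {'rx_packets': 1, 'tx_packets': 1}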
Feb 02 11:40:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d06a83769\x2d0f4f\x2d4012\x2d9307\x2d4d0e81e87120.mount: Deactivated successfully.
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.382 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-06a83769-0f4f-4012-9307-4d0e81e87120 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:40:40 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:40.383 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[945ec6a8-9299-4ab5-9bef-8b4e285bf5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:40:40 compute-0 ceph-mon[74676]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 11 KiB/s wr, 29 op/s
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.610 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.766 251294 INFO nova.virt.libvirt.driver [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Deleting instance files /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_del
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.767 251294 INFO nova.virt.libvirt.driver [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Deletion of /var/lib/nova/instances/ccf853f9-d90e-46b8-85a2-b47f8fc8585e_del complete
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.825 251294 INFO nova.compute.manager [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Took 0.81 seconds to destroy the instance on the hypervisor.
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.825 251294 DEBUG oslo.service.loopingcall [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.825 251294 DEBUG nova.compute.manager [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.826 251294 DEBUG nova.network.neutron [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 11:40:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.945 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.946 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing instance network info cache due to event network-changed-e235e7e6-e897-4b5c-80c9-036612ca0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.946 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.946 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:40:40 compute-0 nova_compute[251290]: 2026-02-02 11:40:40.946 251294 DEBUG nova.network.neutron [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Refreshing network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:40:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:41.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:41 compute-0 nova_compute[251290]: 2026-02-02 11:40:41.793 251294 DEBUG nova.network.neutron [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:40:41 compute-0 nova_compute[251290]: 2026-02-02 11:40:41.812 251294 INFO nova.compute.manager [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Took 0.99 seconds to deallocate network for instance.
Feb 02 11:40:41 compute-0 nova_compute[251290]: 2026-02-02 11:40:41.873 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:41 compute-0 nova_compute[251290]: 2026-02-02 11:40:41.874 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:41 compute-0 nova_compute[251290]: 2026-02-02 11:40:41.942 251294 DEBUG oslo_concurrency.processutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:40:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:40:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762174614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.393 251294 DEBUG nova.network.neutron [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updated VIF entry in instance network info cache for port e235e7e6-e897-4b5c-80c9-036612ca0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.394 251294 DEBUG nova.network.neutron [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Updating instance_info_cache with network_info: [{"id": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "address": "fa:16:3e:83:bd:9e", "network": {"id": "06a83769-0f4f-4012-9307-4d0e81e87120", "bridge": "br-int", "label": "tempest-network-smoke--626207079", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape235e7e6-e8", "ovs_interfaceid": "e235e7e6-e897-4b5c-80c9-036612ca0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.411 251294 DEBUG oslo_concurrency.processutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
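The ceph df call above is nova's storage-usage probe for the RBD backend: it shells out via oslo_concurrency.processutils and parses the JSON reply. A standalone sketch of the same call using only the standard library; the top-level 'stats' and 'pools' keys follow the ceph df JSON schema, though exact pool-level field names vary slightly across Ceph releases:

import json
import subprocess

# Same command nova logged above, run directly.
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

df = json.loads(out)
stats = df["stats"]  # cluster-wide byte totals
print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
for pool in df.get("pools", []):  # per-pool usage
    print(pool["name"], pool["stats"].get("bytes_used"))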
Feb 02 11:40:42 compute-0 ceph-mon[74676]: pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3762174614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.420 251294 DEBUG nova.compute.provider_tree [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.449 251294 DEBUG nova.scheduler.client.report [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
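The inventory dict above is what placement uses to size provider 92919e7b-7846-4645-9401-9fd55bbbf435: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked directly from the logged numbers (step_size, min_unit, and max_unit are omitted since they do not affect the totals):

# Inventory as logged for this provider.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2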
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.455 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-ccf853f9-d90e-46b8-85a2-b47f8fc8585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.456 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-unplugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.456 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.456 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.457 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.457 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-unplugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.457 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-unplugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.457 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.457 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.458 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.458 251294 DEBUG oslo_concurrency.lockutils [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.458 251294 DEBUG nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] No waiting events found dispatching network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.458 251294 WARNING nova.compute.manager [req-2e7b9f6b-1fc2-4fc0-9913-38397661ec93 req-d4454312-6890-41a5-9b4c-e48d898b20ed 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received unexpected event network-vif-plugged-e235e7e6-e897-4b5c-80c9-036612ca0aa0 for instance with vm_state active and task_state deleting.
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.473 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.497 251294 INFO nova.scheduler.client.report [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Deleted allocations for instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e
Feb 02 11:40:42 compute-0 nova_compute[251290]: 2026-02-02 11:40:42.562 251294 DEBUG oslo_concurrency.lockutils [None req-34bb5d50-ea78-4995-94b8-b26a653f1093 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "ccf853f9-d90e-46b8-85a2-b47f8fc8585e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
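The whole teardown above, from the vif unplug at 11:40:40 through the terminate_instance lock release, is tied together by one request id. A minimal sketch for pulling a single request's trail out of a journal dump like this one; the file name is hypothetical, and the regex assumes only the line shape shown above (syslog prefix, then oslo's "timestamp pid LEVEL ..." fields):

import re

REQ = "req-34bb5d50-ea78-4995-94b8-b26a653f1093"
LINE = re.compile(r"nova_compute\[\d+\]: (?P<ts>\S+ \S+) \d+ "
                  r"(?P<level>DEBUG|INFO|WARNING|ERROR) (?P<rest>.*)")

with open("compute-0.log") as fh:   # hypothetical dump of this journal
    for raw in fh:
        if REQ not in raw:
            continue
        m = LINE.search(raw)
        if m:
            print(m.group("ts"), m.group("level"), m.group("rest")[:120])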
Feb 02 11:40:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:43 compute-0 nova_compute[251290]: 2026-02-02 11:40:43.025 251294 DEBUG nova.compute.manager [req-04c4be35-6959-4c61-85da-521ee059afe9 req-b708b42e-0c87-490c-a962-cee55304e972 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Received event network-vif-deleted-e235e7e6-e897-4b5c-80c9-036612ca0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:40:43 compute-0 nova_compute[251290]: 2026-02-02 11:40:43.025 251294 INFO nova.compute.manager [req-04c4be35-6959-4c61-85da-521ee059afe9 req-b708b42e-0c87-490c-a962-cee55304e972 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Neutron deleted interface e235e7e6-e897-4b5c-80c9-036612ca0aa0; detaching it from the instance and deleting it from the info cache
Feb 02 11:40:43 compute-0 nova_compute[251290]: 2026-02-02 11:40:43.026 251294 DEBUG nova.network.neutron [req-04c4be35-6959-4c61-85da-521ee059afe9 req-b708b42e-0c87-490c-a962-cee55304e972 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 02 11:40:43 compute-0 nova_compute[251290]: 2026-02-02 11:40:43.028 251294 DEBUG nova.compute.manager [req-04c4be35-6959-4c61-85da-521ee059afe9 req-b708b42e-0c87-490c-a962-cee55304e972 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Detach interface failed, port_id=e235e7e6-e897-4b5c-80c9-036612ca0aa0, reason: Instance ccf853f9-d90e-46b8-85a2-b47f8fc8585e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 02 11:40:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:43.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:40:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1680465879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:40:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:40:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1680465879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:40:44 compute-0 ceph-mon[74676]: pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1680465879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:40:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1680465879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:40:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:40:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:45 compute-0 nova_compute[251290]: 2026-02-02 11:40:45.275 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:45.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:45 compute-0 nova_compute[251290]: 2026-02-02 11:40:45.611 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:46 compute-0 podman[266024]: 2026-02-02 11:40:46.270718712 +0000 UTC m=+0.057503558 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 11:40:46 compute-0 podman[266025]: 2026-02-02 11:40:46.299999241 +0000 UTC m=+0.084637826 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:40:46 compute-0 ceph-mon[74676]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 51 op/s
Feb 02 11:40:46 compute-0 sudo[266070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:40:46 compute-0 sudo[266070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:40:46 compute-0 sudo[266070]: pam_unix(sudo:session): session closed for user root
Feb 02 11:40:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Feb 02 11:40:46 compute-0 nova_compute[251290]: 2026-02-02 11:40:46.947 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:46 compute-0 nova_compute[251290]: 2026-02-02 11:40:46.974 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:46] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Feb 02 11:40:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:46] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Feb 02 11:40:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:47.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:40:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:47.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:40:48 compute-0 ceph-mon[74676]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Feb 02 11:40:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:48.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:40:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:40:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:40:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:50 compute-0 nova_compute[251290]: 2026-02-02 11:40:50.279 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:50 compute-0 ceph-mon[74676]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:40:50 compute-0 nova_compute[251290]: 2026-02-02 11:40:50.613 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:40:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:52 compute-0 ceph-mon[74676]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:40:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:53.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:54.284 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:40:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:54.285 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:40:54 compute-0 nova_compute[251290]: 2026-02-02 11:40:54.285 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:54 compute-0 ceph-mon[74676]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:55 compute-0 nova_compute[251290]: 2026-02-02 11:40:55.244 251294 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770032440.2434952, ccf853f9-d90e-46b8-85a2-b47f8fc8585e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:40:55 compute-0 nova_compute[251290]: 2026-02-02 11:40:55.245 251294 INFO nova.compute.manager [-] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] VM Stopped (Lifecycle Event)
Feb 02 11:40:55 compute-0 nova_compute[251290]: 2026-02-02 11:40:55.269 251294 DEBUG nova.compute.manager [None req-6a48a765-efb7-4152-9c53-2ffa1dd93891 - - - - - -] [instance: ccf853f9-d90e-46b8-85a2-b47f8fc8585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:40:55 compute-0 nova_compute[251290]: 2026-02-02 11:40:55.280 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:55 compute-0 nova_compute[251290]: 2026-02-02 11:40:55.614 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:40:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:40:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:40:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:40:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:40:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:40:56 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:40:56.288 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:40:56 compute-0 ceph-mon[74676]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:40:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:56] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Feb 02 11:40:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:40:56] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Feb 02 11:40:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:57.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:58 compute-0 ceph-mon[74676]: pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 596 B/s wr, 7 op/s
Feb 02 11:40:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:58.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:40:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:40:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:40:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:40:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:40:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:40:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:40:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:40:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:40:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:40:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:40:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:00 compute-0 nova_compute[251290]: 2026-02-02 11:41:00.282 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:00 compute-0 ceph-mon[74676]: pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:00 compute-0 nova_compute[251290]: 2026-02-02 11:41:00.616 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:41:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:02 compute-0 ceph-mon[74676]: pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:41:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:03 compute-0 ceph-mon[74676]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:05 compute-0 nova_compute[251290]: 2026-02-02 11:41:05.284 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:05.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:05 compute-0 nova_compute[251290]: 2026-02-02 11:41:05.618 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:06 compute-0 ceph-mon[74676]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2846575430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:06 compute-0 sudo[266116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:41:06 compute-0 sudo[266116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:06 compute-0 sudo[266116]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:41:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:41:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:06] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:41:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:07.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:07.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:08 compute-0 ceph-mon[74676]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:41:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:09.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:10 compute-0 ceph-mon[74676]: pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1684696551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:41:10 compute-0 nova_compute[251290]: 2026-02-02 11:41:10.286 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:10 compute-0 nova_compute[251290]: 2026-02-02 11:41:10.647 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:11 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3078654626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:41:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:11.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:11.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:12 compute-0 ceph-mon[74676]: pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:13.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:13.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:14 compute-0 ceph-mon[74676]: pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:41:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:15 compute-0 nova_compute[251290]: 2026-02-02 11:41:15.288 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:15 compute-0 nova_compute[251290]: 2026-02-02 11:41:15.650 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:16 compute-0 ceph-mon[74676]: pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:41:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb 02 11:41:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:16] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Feb 02 11:41:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:16] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Feb 02 11:41:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:17.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:17.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:17 compute-0 podman[266152]: 2026-02-02 11:41:17.272994997 +0000 UTC m=+0.058340282 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 11:41:17 compute-0 podman[266153]: 2026-02-02 11:41:17.334734816 +0000 UTC m=+0.119561766 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:41:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:17.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:17.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:18 compute-0 ceph-mon[74676]: pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb 02 11:41:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 02 11:41:19 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1159385221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:19.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:20 compute-0 ceph-mon[74676]: pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 02 11:41:20 compute-0 nova_compute[251290]: 2026-02-02 11:41:20.291 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:20 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 02 11:41:20 compute-0 nova_compute[251290]: 2026-02-02 11:41:20.652 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 02 11:41:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:21.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:41:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8880 writes, 34K keys, 8880 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8880 writes, 2196 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2012 writes, 6756 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 6.16 MB, 0.01 MB/s
                                           Interval WAL: 2012 writes, 856 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 11:41:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:22 compute-0 ceph-mon[74676]: pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 02 11:41:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:22.678 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:41:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:22.678 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:41:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:22.679 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:41:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:23.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:23 compute-0 ovn_controller[154901]: 2026-02-02T11:41:23Z|00077|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 02 11:41:23 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:41:24 compute-0 ceph-mon[74676]: pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:25 compute-0 nova_compute[251290]: 2026-02-02 11:41:25.293 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:25.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:25.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:25 compute-0 nova_compute[251290]: 2026-02-02 11:41:25.654 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:26 compute-0 ceph-mon[74676]: pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:26 compute-0 sudo[266209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:41:26 compute-0 sudo[266209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:26 compute-0 sudo[266209]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:26] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Feb 02 11:41:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:26] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Feb 02 11:41:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:27.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:41:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:27.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:27.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:27.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:28 compute-0 ceph-mon[74676]: pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:41:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:28.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 597 B/s wr, 6 op/s
Feb 02 11:41:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:41:29
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.control', 'images', '.nfs', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups']
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:41:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:29.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:41:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:41:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:41:30 compute-0 nova_compute[251290]: 2026-02-02 11:41:30.295 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:30 compute-0 ceph-mon[74676]: pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 597 B/s wr, 6 op/s
Feb 02 11:41:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:30 compute-0 nova_compute[251290]: 2026-02-02 11:41:30.656 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 597 B/s wr, 6 op/s
Feb 02 11:41:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3937392353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:31.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:31.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:31 compute-0 sudo[266241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:41:31 compute-0 sudo[266241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:31 compute-0 sudo[266241]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:31 compute-0 sudo[266266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:41:31 compute-0 sudo[266266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:32 compute-0 sudo[266266]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:32 compute-0 ceph-mon[74676]: pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 597 B/s wr, 6 op/s
Feb 02 11:41:32 compute-0 sudo[266324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:41:32 compute-0 sudo[266324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:32 compute-0 sudo[266324]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:32 compute-0 sudo[266349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- inventory --format=json-pretty --filter-for-batch
Feb 02 11:41:32 compute-0 sudo[266349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:32 compute-0 podman[266413]: 2026-02-02 11:41:32.923909567 +0000 UTC m=+0.048306275 container create e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:32 compute-0 systemd[1]: Started libpod-conmon-e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759.scope.
Feb 02 11:41:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:41:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:32 compute-0 podman[266413]: 2026-02-02 11:41:32.90307158 +0000 UTC m=+0.027468288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:33 compute-0 podman[266413]: 2026-02-02 11:41:33.016947363 +0000 UTC m=+0.141344081 container init e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:41:33 compute-0 nova_compute[251290]: 2026-02-02 11:41:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:33 compute-0 podman[266413]: 2026-02-02 11:41:33.028603977 +0000 UTC m=+0.153000685 container start e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:41:33 compute-0 vibrant_lederberg[266430]: 167 167
Feb 02 11:41:33 compute-0 systemd[1]: libpod-e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759.scope: Deactivated successfully.
Feb 02 11:41:33 compute-0 podman[266413]: 2026-02-02 11:41:33.041023223 +0000 UTC m=+0.165419931 container attach e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:41:33 compute-0 podman[266413]: 2026-02-02 11:41:33.042296199 +0000 UTC m=+0.166692907 container died e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc1f60b37f78538c300ce95779343941512a892c3e19c1e5ae1551f2a17778d-merged.mount: Deactivated successfully.
Feb 02 11:41:33 compute-0 podman[266413]: 2026-02-02 11:41:33.118907664 +0000 UTC m=+0.243304372 container remove e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:41:33 compute-0 systemd[1]: libpod-conmon-e32f1d225f16245075dc920a1849d55bfae623af2d2955a3dadd5513d2ea1759.scope: Deactivated successfully.
Feb 02 11:41:33 compute-0 podman[266455]: 2026-02-02 11:41:33.27199546 +0000 UTC m=+0.048406818 container create 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:41:33 compute-0 systemd[1]: Started libpod-conmon-1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2.scope.
Feb 02 11:41:33 compute-0 podman[266455]: 2026-02-02 11:41:33.250531615 +0000 UTC m=+0.026943003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2f872f1bc271c232b067e856f2e9397a8b86aea6e35d2d6277dbba226f4471/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2f872f1bc271c232b067e856f2e9397a8b86aea6e35d2d6277dbba226f4471/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2f872f1bc271c232b067e856f2e9397a8b86aea6e35d2d6277dbba226f4471/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2f872f1bc271c232b067e856f2e9397a8b86aea6e35d2d6277dbba226f4471/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:33 compute-0 podman[266455]: 2026-02-02 11:41:33.41230655 +0000 UTC m=+0.188717938 container init 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:41:33 compute-0 podman[266455]: 2026-02-02 11:41:33.421283078 +0000 UTC m=+0.197694446 container start 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:41:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3861749210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1105277424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2612476086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:33 compute-0 podman[266455]: 2026-02-02 11:41:33.433311722 +0000 UTC m=+0.209723110 container attach 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:41:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000057s ======
Feb 02 11:41:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:33.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Feb 02 11:41:34 compute-0 nova_compute[251290]: 2026-02-02 11:41:34.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:34 compute-0 nova_compute[251290]: 2026-02-02 11:41:34.021 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:34 compute-0 nice_sammet[266471]: [
Feb 02 11:41:34 compute-0 nice_sammet[266471]:     {
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "available": false,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "being_replaced": false,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "ceph_device_lvm": false,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "lsm_data": {},
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "lvs": [],
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "path": "/dev/sr0",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "rejected_reasons": [
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "Insufficient space (<5GB)",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "Has a FileSystem"
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         ],
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         "sys_api": {
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "actuators": null,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "device_nodes": [
Feb 02 11:41:34 compute-0 nice_sammet[266471]:                 "sr0"
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             ],
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "devname": "sr0",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "human_readable_size": "482.00 KB",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "id_bus": "ata",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "model": "QEMU DVD-ROM",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "nr_requests": "2",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "parent": "/dev/sr0",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "partitions": {},
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "path": "/dev/sr0",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "removable": "1",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "rev": "2.5+",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "ro": "0",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "rotational": "1",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "sas_address": "",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "sas_device_handle": "",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "scheduler_mode": "mq-deadline",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "sectors": 0,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "sectorsize": "2048",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "size": 493568.0,
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "support_discard": "2048",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "type": "disk",
Feb 02 11:41:34 compute-0 nice_sammet[266471]:             "vendor": "QEMU"
Feb 02 11:41:34 compute-0 nice_sammet[266471]:         }
Feb 02 11:41:34 compute-0 nice_sammet[266471]:     }
Feb 02 11:41:34 compute-0 nice_sammet[266471]: ]
Feb 02 11:41:34 compute-0 systemd[1]: libpod-1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2.scope: Deactivated successfully.
Feb 02 11:41:34 compute-0 podman[266455]: 2026-02-02 11:41:34.172168931 +0000 UTC m=+0.948580299 container died 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2f872f1bc271c232b067e856f2e9397a8b86aea6e35d2d6277dbba226f4471-merged.mount: Deactivated successfully.
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 podman[266455]: 2026-02-02 11:41:34.242663731 +0000 UTC m=+1.019075099 container remove 1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:41:34 compute-0 systemd[1]: libpod-conmon-1f83a3c8af6edc22ed213469a91fcf02ee70805e29f41f185f37fbe03ffbbeb2.scope: Deactivated successfully.
Feb 02 11:41:34 compute-0 sudo[266349]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:41:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2555232679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:34 compute-0 sudo[267648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:41:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:41:34 compute-0 sudo[267648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:34 compute-0 sudo[267648]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:34 compute-0 sudo[267673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:41:34 compute-0 sudo[267673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:34 compute-0 podman[267738]: 2026-02-02 11:41:34.920613095 +0000 UTC m=+0.037101164 container create 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:41:34 compute-0 systemd[1]: Started libpod-conmon-28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd.scope.
Feb 02 11:41:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:34 compute-0 podman[267738]: 2026-02-02 11:41:34.99895717 +0000 UTC m=+0.115445259 container init 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:41:35 compute-0 podman[267738]: 2026-02-02 11:41:34.904755411 +0000 UTC m=+0.021243500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:35 compute-0 podman[267738]: 2026-02-02 11:41:35.005221169 +0000 UTC m=+0.121709238 container start 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:41:35 compute-0 zealous_brown[267754]: 167 167
Feb 02 11:41:35 compute-0 systemd[1]: libpod-28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd.scope: Deactivated successfully.
Feb 02 11:41:35 compute-0 podman[267738]: 2026-02-02 11:41:35.011377846 +0000 UTC m=+0.127865915 container attach 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:35 compute-0 podman[267738]: 2026-02-02 11:41:35.01187292 +0000 UTC m=+0.128360989 container died 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8596647c6a6b6dbf661dde2b2916b6ba2941fc669effc365d91210a1f7fdb883-merged.mount: Deactivated successfully.
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.043 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.045 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.045 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.045 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.045 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:41:35 compute-0 podman[267738]: 2026-02-02 11:41:35.060626617 +0000 UTC m=+0.177114686 container remove 28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:35 compute-0 systemd[1]: libpod-conmon-28db58c98df01b4c1d777dab5793a2a1d694e640d30838f1231906f047eae6bd.scope: Deactivated successfully.
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.194505373 +0000 UTC m=+0.048708617 container create 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:41:35 compute-0 systemd[1]: Started libpod-conmon-66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454.scope.
Feb 02 11:41:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.172657097 +0000 UTC m=+0.026860361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.286817107 +0000 UTC m=+0.141020351 container init 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.294886549 +0000 UTC m=+0.149089793 container start 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.309348303 +0000 UTC m=+0.163551547 container attach 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.312 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:35 compute-0 ceph-mon[74676]: pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Feb 02 11:41:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:41:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236845744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.519 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:41:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:35.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:35 compute-0 bold_khorana[267815]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:41:35 compute-0 bold_khorana[267815]: --> All data devices are unavailable
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.659 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.667150094 +0000 UTC m=+0.521353338 container died 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:41:35 compute-0 systemd[1]: libpod-66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454.scope: Deactivated successfully.
Feb 02 11:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b61d0e52131efc531ae3577f214f6895e72a890f3c153d160af7ccf4e135c2ba-merged.mount: Deactivated successfully.
Feb 02 11:41:35 compute-0 podman[267779]: 2026-02-02 11:41:35.72737441 +0000 UTC m=+0.581577654 container remove 66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:41:35 compute-0 systemd[1]: libpod-conmon-66b223c174bf99588a35c017cf773a9aa1a35d390d0a9ece077d25b688516454.scope: Deactivated successfully.
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.752 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.754 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4518MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.754 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.755 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:41:35 compute-0 sudo[267673]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.840 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.840 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:41:35 compute-0 sudo[267849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:41:35 compute-0 nova_compute[251290]: 2026-02-02 11:41:35.859 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:41:35 compute-0 sudo[267849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:35 compute-0 sudo[267849]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:35 compute-0 sudo[267875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:41:35 compute-0 sudo[267875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.294289242 +0000 UTC m=+0.038940807 container create afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:41:36 compute-0 systemd[1]: Started libpod-conmon-afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1.scope.
Feb 02 11:41:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:41:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121989317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Feb 02 11:41:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:36 compute-0 nova_compute[251290]: 2026-02-02 11:41:36.362 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:41:36 compute-0 nova_compute[251290]: 2026-02-02 11:41:36.370 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.279940491 +0000 UTC m=+0.024592086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.391112986 +0000 UTC m=+0.135764581 container init afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.397095427 +0000 UTC m=+0.141746992 container start afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb 02 11:41:36 compute-0 nova_compute[251290]: 2026-02-02 11:41:36.399 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:41:36 compute-0 heuristic_easley[267976]: 167 167
Feb 02 11:41:36 compute-0 systemd[1]: libpod-afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1.scope: Deactivated successfully.
Feb 02 11:41:36 compute-0 conmon[267976]: conmon afb8b3903cf3625746d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1.scope/container/memory.events
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.404817648 +0000 UTC m=+0.149469233 container attach afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.406224059 +0000 UTC m=+0.150875654 container died afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:41:36 compute-0 nova_compute[251290]: 2026-02-02 11:41:36.442 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:41:36 compute-0 nova_compute[251290]: 2026-02-02 11:41:36.443 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-37698bd4d3c16c71c1add26bddfd5402fa69e6482527d6bc79bff48b446cf347-merged.mount: Deactivated successfully.
Feb 02 11:41:36 compute-0 podman[267960]: 2026-02-02 11:41:36.4917634 +0000 UTC m=+0.236414965 container remove afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_easley, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:41:36 compute-0 systemd[1]: libpod-conmon-afb8b3903cf3625746d4fadff4b1a5d9c498ce65508ff867d5d2fadb2b253bc1.scope: Deactivated successfully.
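[annotation] The create, init, start, attach, died, remove sequence for heuristic_easley spans roughly 130 ms: cephadm runs one-shot containers from the ceph image to probe the host, and the "167 167" it printed is plausibly a uid/gid probe (167 is the ceph user and group id inside the image). A minimal sketch of the same pattern, assuming only the podman CLI; the wrapper name is illustrative, not cephadm's actual code:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_once(entrypoint, *args):
        # --rm removes the container on exit, which is why every run above
        # ends in a "container remove" event moments after "container died".
        res = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", entrypoint, IMAGE, *args],
            capture_output=True, text=True, check=True)
        return res.stdout.strip()

    print(run_once("stat", "-c", "%u %g", "/var/lib/ceph"))  # e.g. "167 167"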
Feb 02 11:41:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3236845744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2934009108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:41:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3121989317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:36 compute-0 podman[268002]: 2026-02-02 11:41:36.629376732 +0000 UTC m=+0.042500738 container create 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:36 compute-0 systemd[1]: Started libpod-conmon-392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d.scope.
Feb 02 11:41:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fce321d64d2033b185ec4dabcdeb3b5f14f757e535b9865c76b4146e361e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:36 compute-0 podman[268002]: 2026-02-02 11:41:36.612747466 +0000 UTC m=+0.025871472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fce321d64d2033b185ec4dabcdeb3b5f14f757e535b9865c76b4146e361e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fce321d64d2033b185ec4dabcdeb3b5f14f757e535b9865c76b4146e361e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fce321d64d2033b185ec4dabcdeb3b5f14f757e535b9865c76b4146e361e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
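[annotation] The repeated xfs warnings are benign: these overlay mounts carry 32-bit inode timestamps, which run out at 0x7fffffff seconds past the Unix epoch. The cutoff date the kernel is referring to is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest value a signed 32-bit time_t can hold.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00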
Feb 02 11:41:36 compute-0 podman[268002]: 2026-02-02 11:41:36.724275231 +0000 UTC m=+0.137399257 container init 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:41:36 compute-0 podman[268002]: 2026-02-02 11:41:36.733331631 +0000 UTC m=+0.146455637 container start 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:36 compute-0 podman[268002]: 2026-02-02 11:41:36.737291584 +0000 UTC m=+0.150415810 container attach 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:41:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:36] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:41:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:36] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
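[annotation] Both lines above record the same event from two vantage points: Prometheus scraping the ceph-mgr prometheus module (a 200 with 48454 bytes of metrics). The endpoint can be fetched by hand; the port is an assumption here, since the log does not show it (9283 is the module's default):

    from urllib.request import urlopen

    # Port 9283 is the ceph-mgr prometheus module default; treat the URL
    # as an assumption, the scrape above only shows host and path.
    with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(len(body), "bytes of metrics")
    print("\n".join(body.splitlines()[:3]))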
Feb 02 11:41:37 compute-0 zealous_franklin[268019]: {
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:     "1": [
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:         {
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "devices": [
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "/dev/loop3"
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             ],
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "lv_name": "ceph_lv0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "lv_size": "21470642176",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "name": "ceph_lv0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "tags": {
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.cluster_name": "ceph",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.crush_device_class": "",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.encrypted": "0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.osd_id": "1",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.type": "block",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.vdo": "0",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:                 "ceph.with_tpm": "0"
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             },
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "type": "block",
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:             "vg_name": "ceph_vg0"
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:         }
Feb 02 11:41:37 compute-0 zealous_franklin[268019]:     ]
Feb 02 11:41:37 compute-0 zealous_franklin[268019]: }
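[annotation] zealous_franklin's stdout is the JSON form of `ceph-volume lvm list`: a map from OSD id to its logical volumes, with everything cephadm needs (cluster fsid, osd fsid, encryption flag, device class) encoded in lv_tags. A short sketch that runs the same command and reduces the output to one line per OSD, assuming ceph-volume is available on the host (cephadm runs it inside the container, as seen above):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # With the output above: osd.1: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...)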
Feb 02 11:41:37 compute-0 systemd[1]: libpod-392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d.scope: Deactivated successfully.
Feb 02 11:41:37 compute-0 podman[268002]: 2026-02-02 11:41:37.040356537 +0000 UTC m=+0.453480563 container died 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-12fce321d64d2033b185ec4dabcdeb3b5f14f757e535b9865c76b4146e361e8a-merged.mount: Deactivated successfully.
Feb 02 11:41:37 compute-0 podman[268002]: 2026-02-02 11:41:37.081963649 +0000 UTC m=+0.495087655 container remove 392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_franklin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:37 compute-0 systemd[1]: libpod-conmon-392d7d8a92117de3f66fa96d99ab05d980e449cece8d38448424c45d9319b31d.scope: Deactivated successfully.
Feb 02 11:41:37 compute-0 sudo[267875]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:37.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:37 compute-0 sudo[268039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:41:37 compute-0 sudo[268039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:37 compute-0 sudo[268039]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:37 compute-0 sudo[268064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:41:37 compute-0 sudo[268064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:41:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:37.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:41:37 compute-0 ceph-mon[74676]: pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Feb 02 11:41:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2066793531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:41:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:37.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
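[annotation] The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks against radosgw. An equivalent probe; the port is an assumption, since the beast frontend's listen address is not shown in these lines:

    import http.client

    # Port 8080 is an assumption; substitute the rgw beast frontend port.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # a healthy rgw answers 200 with no body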
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.673827087 +0000 UTC m=+0.043023474 container create 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 11:41:37 compute-0 systemd[1]: Started libpod-conmon-546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee.scope.
Feb 02 11:41:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.656869161 +0000 UTC m=+0.026065558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.763463305 +0000 UTC m=+0.132659702 container init 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.771516876 +0000 UTC m=+0.140713253 container start 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.77584822 +0000 UTC m=+0.145044617 container attach 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:37 compute-0 nervous_blackwell[268150]: 167 167
Feb 02 11:41:37 compute-0 systemd[1]: libpod-546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee.scope: Deactivated successfully.
Feb 02 11:41:37 compute-0 conmon[268150]: conmon 546c3ffad2f4cb55a8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee.scope/container/memory.events
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.779629358 +0000 UTC m=+0.148825755 container died 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae204effcbec37f7c5cae218911186bba3e804c522c55faf796a893635fd671b-merged.mount: Deactivated successfully.
Feb 02 11:41:37 compute-0 podman[268133]: 2026-02-02 11:41:37.823963309 +0000 UTC m=+0.193159686 container remove 546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_blackwell, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:41:37 compute-0 systemd[1]: libpod-conmon-546c3ffad2f4cb55a8c637183f9e8ebcb3f2550c976b8882e8b709cd6d07f4ee.scope: Deactivated successfully.
Feb 02 11:41:37 compute-0 podman[268172]: 2026-02-02 11:41:37.965558855 +0000 UTC m=+0.044601548 container create 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:41:38 compute-0 systemd[1]: Started libpod-conmon-9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d.scope.
Feb 02 11:41:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8116c5fe78002a8045b66290b9d8b1c87a78a228f4f9e262bdbaa5c77283c7eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8116c5fe78002a8045b66290b9d8b1c87a78a228f4f9e262bdbaa5c77283c7eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8116c5fe78002a8045b66290b9d8b1c87a78a228f4f9e262bdbaa5c77283c7eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8116c5fe78002a8045b66290b9d8b1c87a78a228f4f9e262bdbaa5c77283c7eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:37.947614951 +0000 UTC m=+0.026657664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:38.042587362 +0000 UTC m=+0.121630075 container init 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:38.04981834 +0000 UTC m=+0.128861033 container start 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:38.054242196 +0000 UTC m=+0.133284909 container attach 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:41:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 02 11:41:38 compute-0 lvm[268260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:41:38 compute-0 lvm[268260]: VG ceph_vg0 finished
Feb 02 11:41:38 compute-0 romantic_kowalevski[268186]: {}
Feb 02 11:41:38 compute-0 systemd[1]: libpod-9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d.scope: Deactivated successfully.
Feb 02 11:41:38 compute-0 systemd[1]: libpod-9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d.scope: Consumed 1.080s CPU time.
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:38.809786403 +0000 UTC m=+0.888829106 container died 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8116c5fe78002a8045b66290b9d8b1c87a78a228f4f9e262bdbaa5c77283c7eb-merged.mount: Deactivated successfully.
Feb 02 11:41:38 compute-0 podman[268172]: 2026-02-02 11:41:38.86236361 +0000 UTC m=+0.941406303 container remove 9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:41:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:38.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:38.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
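[annotation] Alertmanager is dispatching to two webhook receivers (the Ceph dashboard on compute-1 and compute-2, port 8443) and both fail: first a dial timeout, then "notify retry canceled ... context deadline exceeded" once retries exhaust the notification deadline. The receiving side is plain HTTP; a stub that would accept these POSTs, purely illustrative (the real receiver is the dashboard's /api/prometheus_receiver handler):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_response(404); self.end_headers(); return
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            # Alertmanager webhook payloads carry the alerts under "alerts".
            print("received", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()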
Feb 02 11:41:38 compute-0 systemd[1]: libpod-conmon-9208af6add6b8252443e4ee744bff91d7dca48bc44eddaccc461daad56cf3e7d.scope: Deactivated successfully.
Feb 02 11:41:38 compute-0 sudo[268064]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:41:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:41:38 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:39 compute-0 sudo[268277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:41:39 compute-0 sudo[268277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:39 compute-0 sudo[268277]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:39 compute-0 nova_compute[251290]: 2026-02-02 11:41:39.437 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:39 compute-0 nova_compute[251290]: 2026-02-02 11:41:39.438 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:39 compute-0 nova_compute[251290]: 2026-02-02 11:41:39.438 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:39 compute-0 nova_compute[251290]: 2026-02-02 11:41:39.438 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:39 compute-0 nova_compute[251290]: 2026-02-02 11:41:39.438 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
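[annotation] This burst shows oslo.service walking ComputeManager's periodic tasks; _reclaim_queued_deletes exits immediately because reclaim_instance_interval is not positive. The config-gated task pattern itself is simple (a sketch, not Nova's actual code):

    import time

    conf = {"reclaim_instance_interval": 0}   # value implied by the log line

    def reclaim_queued_deletes():
        # Mirrors the guard logged above: a non-positive interval disables
        # the task body entirely.
        if conf["reclaim_instance_interval"] <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...purge instances soft-deleted longer than the interval...

    for _ in range(2):            # oslo.service drives the real schedule
        reclaim_queued_deletes()
        time.sleep(1)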
Feb 02 11:41:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:39.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:39 compute-0 ceph-mon[74676]: pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 02 11:41:39 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:39 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:41:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:39.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.033 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.033 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.034 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.051 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.315 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 02 11:41:40 compute-0 nova_compute[251290]: 2026-02-02 11:41:40.661 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:41.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:41 compute-0 ceph-mon[74676]: pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 02 11:41:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:41.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Feb 02 11:41:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:43 compute-0 ceph-mon[74676]: pgmap v900: 353 pgs: 353 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Feb 02 11:41:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:43.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:41:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2651985534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:41:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:41:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2651985534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
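[annotation] client.openstack here is the OpenStack RBD connection polling cluster capacity: a `df` for overall usage plus `osd pool get-quota` on the volumes pool. The same two queries issued from the CLI, assuming admin credentials on this host:

    import json
    import subprocess

    def ceph(*args):
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph("df")
    print(df["stats"]["total_bytes"], "bytes total in the cluster")
    print(ceph("osd", "pool", "get-quota", "volumes"))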
Feb 02 11:41:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Feb 02 11:41:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1111452106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:41:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2651985534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:41:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2651985534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:41:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:41:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:45 compute-0 nova_compute[251290]: 2026-02-02 11:41:45.318 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:45 compute-0 ceph-mon[74676]: pgmap v901: 353 pgs: 353 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Feb 02 11:41:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:45 compute-0 nova_compute[251290]: 2026-02-02 11:41:45.665 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 02 11:41:46 compute-0 ceph-mon[74676]: pgmap v902: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.626781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506626825, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2113, "num_deletes": 251, "total_data_size": 4126300, "memory_usage": 4193440, "flush_reason": "Manual Compaction"}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506655647, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3994949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24733, "largest_seqno": 26845, "table_properties": {"data_size": 3985624, "index_size": 5755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19908, "raw_average_key_size": 20, "raw_value_size": 3966711, "raw_average_value_size": 4051, "num_data_blocks": 253, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032303, "oldest_key_time": 1770032303, "file_creation_time": 1770032506, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 28919 microseconds, and 7228 cpu microseconds.
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.655700) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3994949 bytes OK
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.655720) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.657789) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.657867) EVENT_LOG_v1 {"time_micros": 1770032506657851, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.657905) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4117534, prev total WAL file size 4117534, number of live WAL files 2.
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.658856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3901KB)], [56(11MB)]
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506658929, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16566757, "oldest_snapshot_seqno": -1}
Feb 02 11:41:46 compute-0 sudo[268311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:41:46 compute-0 sudo[268311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:41:46 compute-0 sudo[268311]: pam_unix(sudo:session): session closed for user root
Feb 02 11:41:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5859 keys, 14413609 bytes, temperature: kUnknown
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506756956, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14413609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14374043, "index_size": 23840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 149035, "raw_average_key_size": 25, "raw_value_size": 14267831, "raw_average_value_size": 2435, "num_data_blocks": 971, "num_entries": 5859, "num_filter_entries": 5859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032506, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.757238) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14413609 bytes
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.761896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6377, records dropped: 518 output_compression: NoCompression
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.761926) EVENT_LOG_v1 {"time_micros": 1770032506761912, "job": 30, "event": "compaction_finished", "compaction_time_micros": 98102, "compaction_time_cpu_micros": 26499, "output_level": 6, "num_output_files": 1, "total_output_size": 14413609, "num_input_records": 6377, "num_output_records": 5859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506762678, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032506764570, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.658768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.764674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.764681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.764683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.764685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:41:46.764687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:41:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:46] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:41:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:46] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:41:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:47.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:47 compute-0 sshd-session[268309]: Invalid user lighthouse from 80.94.92.186 port 34254
Feb 02 11:41:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:47 compute-0 sshd-session[268309]: Connection closed by invalid user lighthouse 80.94.92.186 port 34254 [preauth]
Feb 02 11:41:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:48 compute-0 podman[268338]: 2026-02-02 11:41:48.296552 +0000 UTC m=+0.082626569 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:41:48 compute-0 podman[268339]: 2026-02-02 11:41:48.302629454 +0000 UTC m=+0.088711204 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:41:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:49 compute-0 ceph-mon[74676]: pgmap v903: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:50 compute-0 nova_compute[251290]: 2026-02-02 11:41:50.320 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:50 compute-0 nova_compute[251290]: 2026-02-02 11:41:50.667 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:51.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:51 compute-0 ceph-mon[74676]: pgmap v904: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:53.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:53 compute-0 ceph-mon[74676]: pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 02 11:41:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 597 B/s wr, 13 op/s
Feb 02 11:41:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:54.762 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:41:54 compute-0 nova_compute[251290]: 2026-02-02 11:41:54.763 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:54.764 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:41:54 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:41:54.764 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:41:55 compute-0 nova_compute[251290]: 2026-02-02 11:41:55.322 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:55.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:55 compute-0 ceph-mon[74676]: pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 597 B/s wr, 13 op/s
Feb 02 11:41:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:55 compute-0 nova_compute[251290]: 2026-02-02 11:41:55.708 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:41:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:41:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:41:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:41:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:41:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:41:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 13 op/s
Feb 02 11:41:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:41:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:56] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:41:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:41:56] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:41:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:57.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:41:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:57.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:41:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:57.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:41:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:57 compute-0 ceph-mon[74676]: pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 13 op/s
Feb 02 11:41:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:41:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:41:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:41:58.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:41:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:41:59.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:59 compute-0 ceph-mon[74676]: pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:41:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:41:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:41:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:41:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:41:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:41:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:41:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:00 compute-0 nova_compute[251290]: 2026-02-02 11:42:00.325 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:00 compute-0 nova_compute[251290]: 2026-02-02 11:42:00.710 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:01 compute-0 ceph-mon[74676]: pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:01.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:42:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:03 compute-0 ceph-mon[74676]: pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:42:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:03.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:04 compute-0 ceph-mon[74676]: pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:05 compute-0 nova_compute[251290]: 2026-02-02 11:42:05.327 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:05.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:05 compute-0 nova_compute[251290]: 2026-02-02 11:42:05.712 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:42:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:06 compute-0 sudo[268399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:42:06 compute-0 sudo[268399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:06 compute-0 sudo[268399]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:06] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb 02 11:42:07 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:06] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb 02 11:42:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:07.163Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:42:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:07.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:07 compute-0 ceph-mon[74676]: pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:42:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:07.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.542 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.543 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.566 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 11:42:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:07.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.645 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.646 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.654 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.655 251294 INFO nova.compute.claims [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Claim successful on node compute-0.ctlplane.example.com
Feb 02 11:42:07 compute-0 nova_compute[251290]: 2026-02-02 11:42:07.752 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:42:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673122182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.308 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.314 251294 DEBUG nova.compute.provider_tree [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.329 251294 DEBUG nova.scheduler.client.report [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.353 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.354 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 11:42:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.412 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.412 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.434 251294 INFO nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 11:42:08 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3673122182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.461 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.555 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.556 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.557 251294 INFO nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Creating image(s)
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.589 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.621 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.655 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.660 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.717 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.718 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.719 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.719 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.753 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.760 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 0f8aafba-a457-4636-9680-282892d6ab4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:08.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:42:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:08.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:42:08 compute-0 nova_compute[251290]: 2026-02-02 11:42:08.884 251294 DEBUG nova.policy [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.041 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 0f8aafba-a457-4636-9680-282892d6ab4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.139 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] resizing rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.279 251294 DEBUG nova.objects.instance [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f8aafba-a457-4636-9680-282892d6ab4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.299 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.300 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Ensure instance console log exists: /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.301 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.301 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.302 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:09 compute-0 ceph-mon[74676]: pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:09.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:09 compute-0 nova_compute[251290]: 2026-02-02 11:42:09.761 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Successfully created port: b82b20ab-af9a-43b9-a706-bb34ec1624ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.329 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.682 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Successfully updated port: b82b20ab-af9a-43b9-a706-bb34ec1624ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.699 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.700 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.700 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.715 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.835 251294 DEBUG nova.compute.manager [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.836 251294 DEBUG nova.compute.manager [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing instance network info cache due to event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.836 251294 DEBUG oslo_concurrency.lockutils [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:42:10 compute-0 nova_compute[251290]: 2026-02-02 11:42:10.911 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 11:42:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:11 compute-0 ceph-mon[74676]: pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:42:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:42:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:42:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:11.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.317 251294 DEBUG nova.network.neutron [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updating instance_info_cache with network_info: [{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.343 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.344 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Instance network_info: |[{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.344 251294 DEBUG oslo_concurrency.lockutils [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.345 251294 DEBUG nova.network.neutron [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.347 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Start _get_guest_xml network_info=[{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '8a4b36bd-584f-4a0a-aab3-55c0b12d2d97'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.352 251294 WARNING nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.361 251294 DEBUG nova.virt.libvirt.host [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.362 251294 DEBUG nova.virt.libvirt.host [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.368 251294 DEBUG nova.virt.libvirt.host [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.369 251294 DEBUG nova.virt.libvirt.host [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.369 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.370 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:33:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5413fce8-24ad-46a1-a21e-000a8299c8f6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.370 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.370 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.371 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.371 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.371 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.371 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.371 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.372 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.372 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.372 251294 DEBUG nova.virt.hardware [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
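[annotation] The nova.virt.hardware lines above trace the topology search: with flavor and image limits all unset (0:0:0) and one vCPU, the only factorization under the 65536 per-dimension ceilings is 1 socket x 1 core x 1 thread. A toy re-derivation (simplified, not nova.virt.hardware itself):

    # Toy sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, under the per-dimension limits logged above.
    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged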
Feb 02 11:42:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.375 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:42:12 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071678309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.882 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.914 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:12 compute-0 nova_compute[251290]: 2026-02-02 11:42:12.919 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:13 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:42:13 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775829539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.396 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
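[annotation] Nova runs the monmap probe above as a plain subprocess through oslo.concurrency; a standalone reproduction of the same command (an illustration, not Nova's call site):

    # Sketch: fetch the Ceph monmap the same way the log shows Nova doing it.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # 'out' carries the monitor addresses that end up as <host> entries
    # in the RBD disk sections of the guest XML further down.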
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.398 251294 DEBUG nova.virt.libvirt.vif [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90669998',display_name='tempest-TestNetworkBasicOps-server-90669998',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90669998',id=10,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH7AYwk3S4KClmPtHKdjl+uWO9slW5bEgBmgkhf3Zf5ReJTfs28+H0FVQNO5XH303KjC3bOfUKkNsAk7HeguQTQsxi/gMlL6eFuYT31TuapZa7PnX5MQsq3SNyrRbxhLvQ==',key_name='tempest-TestNetworkBasicOps-1560896376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-cusfi5f1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:42:08Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=0f8aafba-a457-4636-9680-282892d6ab4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.398 251294 DEBUG nova.network.os_vif_util [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.399 251294 DEBUG nova.network.os_vif_util [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.400 251294 DEBUG nova.objects.instance [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f8aafba-a457-4636-9680-282892d6ab4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.423 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] End _get_guest_xml xml=<domain type="kvm">
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <uuid>0f8aafba-a457-4636-9680-282892d6ab4a</uuid>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <name>instance-0000000a</name>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <memory>131072</memory>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <vcpu>1</vcpu>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:name>tempest-TestNetworkBasicOps-server-90669998</nova:name>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:creationTime>2026-02-02 11:42:12</nova:creationTime>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:flavor name="m1.nano">
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:memory>128</nova:memory>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:disk>1</nova:disk>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:swap>0</nova:swap>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:vcpus>1</nova:vcpus>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </nova:flavor>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:owner>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </nova:owner>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <nova:ports>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <nova:port uuid="b82b20ab-af9a-43b9-a706-bb34ec1624ce">
Feb 02 11:42:13 compute-0 nova_compute[251290]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         </nova:port>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </nova:ports>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </nova:instance>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <sysinfo type="smbios">
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <system>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="manufacturer">RDO</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="product">OpenStack Compute</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="serial">0f8aafba-a457-4636-9680-282892d6ab4a</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="uuid">0f8aafba-a457-4636-9680-282892d6ab4a</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <entry name="family">Virtual Machine</entry>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </system>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <os>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <boot dev="hd"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <smbios mode="sysinfo"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </os>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <features>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <vmcoreinfo/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </features>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <clock offset="utc">
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <timer name="hpet" present="no"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <cpu mode="host-model" match="exact">
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <disk type="network" device="disk">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/0f8aafba-a457-4636-9680-282892d6ab4a_disk">
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </source>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <target dev="vda" bus="virtio"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <disk type="network" device="cdrom">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/0f8aafba-a457-4636-9680-282892d6ab4a_disk.config">
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </source>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:42:13 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <target dev="sda" bus="sata"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <interface type="ethernet">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <mac address="fa:16:3e:53:77:3f"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <mtu size="1442"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <target dev="tapb82b20ab-af"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <serial type="pty">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <log file="/var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/console.log" append="off"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <video>
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </video>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <input type="tablet" bus="usb"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <rng model="virtio">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <backend model="random">/dev/urandom</backend>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <controller type="usb" index="0"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     <memballoon model="virtio">
Feb 02 11:42:13 compute-0 nova_compute[251290]:       <stats period="10"/>
Feb 02 11:42:13 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:42:13 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:42:13 compute-0 nova_compute[251290]: </domain>
Feb 02 11:42:13 compute-0 nova_compute[251290]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
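[annotation] With the domain XML rendered, the driver hands it to libvirt to define and boot the guest; a bare-bones equivalent with the libvirt-python bindings (a sketch assuming `xml` holds the <domain> document above; Nova's managed path adds error handling and event plumbing):

    # Sketch: define and start a domain from the XML dumped above.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)    # xml: the <domain> document logged above
    dom.createWithFlags(0)       # powers on instance-0000000a
    conn.close()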
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.424 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Preparing to wait for external event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.425 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.425 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.425 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.426 251294 DEBUG nova.virt.libvirt.vif [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90669998',display_name='tempest-TestNetworkBasicOps-server-90669998',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90669998',id=10,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH7AYwk3S4KClmPtHKdjl+uWO9slW5bEgBmgkhf3Zf5ReJTfs28+H0FVQNO5XH303KjC3bOfUKkNsAk7HeguQTQsxi/gMlL6eFuYT31TuapZa7PnX5MQsq3SNyrRbxhLvQ==',key_name='tempest-TestNetworkBasicOps-1560896376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-cusfi5f1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:42:08Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=0f8aafba-a457-4636-9680-282892d6ab4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.426 251294 DEBUG nova.network.os_vif_util [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.427 251294 DEBUG nova.network.os_vif_util [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.427 251294 DEBUG os_vif [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.428 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.428 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.429 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.432 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.432 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb82b20ab-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.433 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb82b20ab-af, col_values=(('external_ids', {'iface-id': 'b82b20ab-af9a-43b9-a706-bb34ec1624ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:77:3f', 'vm-uuid': '0f8aafba-a457-4636-9680-282892d6ab4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
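[annotation] The two ovsdbapp transaction commands above (AddPortCommand, then DbSetCommand on the Interface row) map to the following CLI operations; this is an illustrative equivalent, not the OVSDB IDL path Nova actually takes:

    # Sketch: ovs-vsctl equivalent of the logged OVSDB transaction.
    import subprocess

    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port',
                    'br-int', 'tapb82b20ab-af'], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', 'tapb82b20ab-af',
                    'external_ids:iface-id=b82b20ab-af9a-43b9-a706-bb34ec1624ce',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac=fa:16:3e:53:77:3f',
                    'external_ids:vm-uuid=0f8aafba-a457-4636-9680-282892d6ab4a'],
                   check=True)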
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.434 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:13 compute-0 NetworkManager[49067]: <info>  [1770032533.4358] manager: (tapb82b20ab-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.438 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.442 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.444 251294 INFO os_vif [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af')
Feb 02 11:42:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:13.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:13 compute-0 ceph-mon[74676]: pgmap v915: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:42:13 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3071678309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:42:13 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2775829539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.508 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.509 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.509 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:53:77:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.509 251294 INFO nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Using config drive
Feb 02 11:42:13 compute-0 nova_compute[251290]: 2026-02-02 11:42:13.537 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:13.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.106 251294 INFO nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Creating config drive at /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.110 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpytm7rhmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.129 251294 DEBUG nova.network.neutron [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updated VIF entry in instance network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.130 251294 DEBUG nova.network.neutron [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updating instance_info_cache with network_info: [{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.152 251294 DEBUG oslo_concurrency.lockutils [req-afc1ec16-336c-46b5-8c63-40383f317ca6 req-30390406-fc86-46c1-acc9-52fd315fde7f 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.235 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpytm7rhmi" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.269 251294 DEBUG nova.storage.rbd_utils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 0f8aafba-a457-4636-9680-282892d6ab4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.277 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config 0f8aafba-a457-4636-9680-282892d6ab4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.462 251294 DEBUG oslo_concurrency.processutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config 0f8aafba-a457-4636-9680-282892d6ab4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.464 251294 INFO nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Deleting local config drive /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a/disk.config because it was imported into RBD.
Feb 02 11:42:14 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 11:42:14 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 11:42:14 compute-0 kernel: tapb82b20ab-af: entered promiscuous mode
Feb 02 11:42:14 compute-0 NetworkManager[49067]: <info>  [1770032534.5590] manager: (tapb82b20ab-af): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Feb 02 11:42:14 compute-0 systemd-udevd[268771]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:42:14 compute-0 ovn_controller[154901]: 2026-02-02T11:42:14Z|00078|binding|INFO|Claiming lport b82b20ab-af9a-43b9-a706-bb34ec1624ce for this chassis.
Feb 02 11:42:14 compute-0 ovn_controller[154901]: 2026-02-02T11:42:14Z|00079|binding|INFO|b82b20ab-af9a-43b9-a706-bb34ec1624ce: Claiming fa:16:3e:53:77:3f 10.100.0.5
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.600 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.605 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:14 compute-0 NetworkManager[49067]: <info>  [1770032534.6146] device (tapb82b20ab-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:42:14 compute-0 NetworkManager[49067]: <info>  [1770032534.6154] device (tapb82b20ab-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:42:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:42:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:14 compute-0 ovn_controller[154901]: 2026-02-02T11:42:14Z|00080|binding|INFO|Setting lport b82b20ab-af9a-43b9-a706-bb34ec1624ce ovn-installed in OVS
Feb 02 11:42:14 compute-0 nova_compute[251290]: 2026-02-02 11:42:14.636 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:14 compute-0 systemd-machined[218018]: New machine qemu-4-instance-0000000a.
Feb 02 11:42:14 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Feb 02 11:42:14 compute-0 ovn_controller[154901]: 2026-02-02T11:42:14Z|00081|binding|INFO|Setting lport b82b20ab-af9a-43b9-a706-bb34ec1624ce up in Southbound
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.892 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:77:3f 10.100.0.5'], port_security=['fa:16:3e:53:77:3f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0f8aafba-a457-4636-9680-282892d6ab4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07ef7365-e94e-44a9-9670-d305a9339f4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c5f2d68-54aa-4a79-9d3d-e6c7b420c784', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efa3babe-77cb-4d79-b297-03518568f04f, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=b82b20ab-af9a-43b9-a706-bb34ec1624ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.893 165304 INFO neutron.agent.ovn.metadata.agent [-] Port b82b20ab-af9a-43b9-a706-bb34ec1624ce in datapath 07ef7365-e94e-44a9-9670-d305a9339f4f bound to our chassis
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.895 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07ef7365-e94e-44a9-9670-d305a9339f4f
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.909 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b596be-464d-4181-91f8-f7190f2145ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.911 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07ef7365-e1 in ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.914 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07ef7365-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.914 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a86af42f-7f8a-4cb7-9a24-154c49a753a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.915 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[e1442797-bc90-41d1-9392-772ee99c4788]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.933 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[4611c03d-87fb-4474-96b4-f813514ccadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.948 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[1458db5f-a325-46d7-a9c2-25c2ea03f05c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.980 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[81eaa935-18ca-4f57-b992-55c522342763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:14.985 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9c90b902-95fe-46a2-a6f7-ca024a7c0732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:14 compute-0 systemd-udevd[268773]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:42:14 compute-0 NetworkManager[49067]: <info>  [1770032534.9928] manager: (tap07ef7365-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.021 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[adfe96ec-6f07-4f71-82fc-1a0619a7ad5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.025 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cc4855-1aaf-4bc0-ab88-bbd8ff7b7f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 NetworkManager[49067]: <info>  [1770032535.0520] device (tap07ef7365-e0): carrier: link connected
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.058 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[92bb01fb-b7b8-4c04-8fe5-87a17b642d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.084 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3b7e23-8cf9-4dd9-809c-5c4f288a826a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07ef7365-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418931, 'reachable_time': 21351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268823, 'error': None, 'target': 'ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.104 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[65df1b1c-5205-477a-9d0f-694c6b9e02fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c61c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418931, 'tstamp': 418931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268827, 'error': None, 'target': 'ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.124 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9b7927-ade9-4c0d-9d24-dbdc1d34a5a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07ef7365-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418931, 'reachable_time': 21351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268843, 'error': None, 'target': 'ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.157 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[998213c7-f3df-4849-bfe0-c1506884058b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.216 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[5baa699c-97a3-4bd2-ba28-ddd03a320955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.220 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07ef7365-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.220 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.221 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07ef7365-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.223 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:15 compute-0 NetworkManager[49067]: <info>  [1770032535.2239] manager: (tap07ef7365-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 02 11:42:15 compute-0 kernel: tap07ef7365-e0: entered promiscuous mode
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.227 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07ef7365-e0, col_values=(('external_ids', {'iface-id': '88dfe7f1-6b98-403c-9895-7397b377226b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:15 compute-0 ovn_controller[154901]: 2026-02-02T11:42:15Z|00082|binding|INFO|Releasing lport 88dfe7f1-6b98-403c-9895-7397b377226b from this chassis (sb_readonly=0)
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.229 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.232 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07ef7365-e94e-44a9-9670-d305a9339f4f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07ef7365-e94e-44a9-9670-d305a9339f4f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.233 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[0395ac20-812a-476d-a0a6-f5cac43cba29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.235 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-07ef7365-e94e-44a9-9670-d305a9339f4f
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/07ef7365-e94e-44a9-9670-d305a9339f4f.pid.haproxy
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID 07ef7365-e94e-44a9-9670-d305a9339f4f
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.235 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:15 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:15.236 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f', 'env', 'PROCESS_TAG=haproxy-07ef7365-e94e-44a9-9670-d305a9339f4f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07ef7365-e94e-44a9-9670-d305a9339f4f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.261 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032535.2600174, 0f8aafba-a457-4636-9680-282892d6ab4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.262 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] VM Started (Lifecycle Event)
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.300 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.305 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032535.2610025, 0f8aafba-a457-4636-9680-282892d6ab4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.306 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] VM Paused (Lifecycle Event)
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.341 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.346 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.374 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:42:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:15 compute-0 ceph-mon[74676]: pgmap v916: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:42:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:15.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:15 compute-0 podman[268885]: 2026-02-02 11:42:15.640115712 +0000 UTC m=+0.062580605 container create ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 11:42:15 compute-0 systemd[1]: Started libpod-conmon-ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b.scope.
Feb 02 11:42:15 compute-0 podman[268885]: 2026-02-02 11:42:15.606935111 +0000 UTC m=+0.029400034 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:42:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22308ff1fd0ab39fd3bbe6c04e8dfe54944935fff9690779845ac0499534a4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:15 compute-0 nova_compute[251290]: 2026-02-02 11:42:15.716 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:15 compute-0 podman[268885]: 2026-02-02 11:42:15.722948727 +0000 UTC m=+0.145413640 container init ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 11:42:15 compute-0 podman[268885]: 2026-02-02 11:42:15.729056312 +0000 UTC m=+0.151521205 container start ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 02 11:42:15 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [NOTICE]   (268906) : New worker (268908) forked
Feb 02 11:42:15 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [NOTICE]   (268906) : Loading success.
Feb 02 11:42:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.107 251294 DEBUG nova.compute.manager [req-1c34b461-947f-4fc7-ab05-348aad015576 req-8281a4d1-341f-460c-8e81-6de37bf25144 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.107 251294 DEBUG oslo_concurrency.lockutils [req-1c34b461-947f-4fc7-ab05-348aad015576 req-8281a4d1-341f-460c-8e81-6de37bf25144 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.108 251294 DEBUG oslo_concurrency.lockutils [req-1c34b461-947f-4fc7-ab05-348aad015576 req-8281a4d1-341f-460c-8e81-6de37bf25144 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.108 251294 DEBUG oslo_concurrency.lockutils [req-1c34b461-947f-4fc7-ab05-348aad015576 req-8281a4d1-341f-460c-8e81-6de37bf25144 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.109 251294 DEBUG nova.compute.manager [req-1c34b461-947f-4fc7-ab05-348aad015576 req-8281a4d1-341f-460c-8e81-6de37bf25144 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Processing event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.109 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.114 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032536.1144674, 0f8aafba-a457-4636-9680-282892d6ab4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.115 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] VM Resumed (Lifecycle Event)
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.117 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.120 251294 INFO nova.virt.libvirt.driver [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Instance spawned successfully.
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.120 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.147 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.153 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.153 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.154 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.155 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.155 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.156 251294 DEBUG nova.virt.libvirt.driver [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.160 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.223 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.245 251294 INFO nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Took 7.69 seconds to spawn the instance on the hypervisor.
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.246 251294 DEBUG nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.318 251294 INFO nova.compute.manager [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Took 8.70 seconds to build instance.
Feb 02 11:42:16 compute-0 nova_compute[251290]: 2026-02-02 11:42:16.350 251294 DEBUG oslo_concurrency.lockutils [None req-78c29ced-5efe-4c61-ab11-f469e008d588 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Feb 02 11:42:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:16] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:42:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:16] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:42:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:17.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:17.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:17 compute-0 ceph-mon[74676]: pgmap v917: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Feb 02 11:42:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:17.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.212 251294 DEBUG nova.compute.manager [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.212 251294 DEBUG oslo_concurrency.lockutils [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.212 251294 DEBUG oslo_concurrency.lockutils [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.213 251294 DEBUG oslo_concurrency.lockutils [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.213 251294 DEBUG nova.compute.manager [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] No waiting events found dispatching network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.213 251294 WARNING nova.compute.manager [req-2cf8ae31-6942-497e-a2f7-5a1f0571f622 req-ecf90cfb-7e40-4d1c-8d45-39e333bf14b2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received unexpected event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce for instance with vm_state active and task_state None.
Feb 02 11:42:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:42:18 compute-0 nova_compute[251290]: 2026-02-02 11:42:18.436 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:18.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:19 compute-0 podman[268920]: 2026-02-02 11:42:19.304517221 +0000 UTC m=+0.080515499 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 11:42:19 compute-0 podman[268921]: 2026-02-02 11:42:19.315875706 +0000 UTC m=+0.092190003 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 11:42:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:19.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
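The beast access lines show anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 every two seconds, the classic load-balancer liveness pattern against the RGW frontend. A sketch that reproduces the probe, assuming a hypothetical gateway port of 8080 (the log does not record the listening port) and noting that http.client speaks HTTP/1.1 rather than the probe's HTTP/1.0:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")      # same anonymous root probe as in the log
    resp = conn.getresponse()
    print(resp.status)             # 200 for a healthy gateway (http_status=200 above)
    conn.close()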
Feb 02 11:42:19 compute-0 ceph-mon[74676]: pgmap v918: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:42:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:19.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:20 compute-0 ovn_controller[154901]: 2026-02-02T11:42:20Z|00083|binding|INFO|Releasing lport 88dfe7f1-6b98-403c-9895-7397b377226b from this chassis (sb_readonly=0)
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.033 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:20 compute-0 NetworkManager[49067]: <info>  [1770032540.0346] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Feb 02 11:42:20 compute-0 NetworkManager[49067]: <info>  [1770032540.0357] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb 02 11:42:20 compute-0 ovn_controller[154901]: 2026-02-02T11:42:20Z|00084|binding|INFO|Releasing lport 88dfe7f1-6b98-403c-9895-7397b377226b from this chassis (sb_readonly=0)
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.339 251294 DEBUG nova.compute.manager [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.340 251294 DEBUG nova.compute.manager [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing instance network info cache due to event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.340 251294 DEBUG oslo_concurrency.lockutils [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.340 251294 DEBUG oslo_concurrency.lockutils [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.340 251294 DEBUG nova.network.neutron [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:42:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:42:20 compute-0 nova_compute[251290]: 2026-02-02 11:42:20.719 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
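This four-line ganesha cycle repeats roughly every five seconds: the NFS server (re)enters a 90-second grace window, reloads client reclaim records from the RADOS recovery backend, checks whether grace can be lifted, and consults the cluster-wide grace DB. An illustrative model of that check, not ganesha's actual logic, with the lift condition guessed from the "reclaim complete(0) clid count(0)" wording:

    import time

    GRACE_SECONDS = 90.0   # "NFS Server Now IN GRACE, duration 90"

    def try_lift_grace(started: float, reclaim_complete: bool, clid_count: int) -> bool:
        # Guessed condition: lift early once every client has finished reclaim,
        # otherwise wait the window out. In the clustered case the shared grace
        # DB must agree too, which is why this node keeps re-entering grace.
        expired = time.monotonic() - started >= GRACE_SECONDS
        return (reclaim_complete and clid_count == 0) or expired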
Feb 02 11:42:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:21.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:21 compute-0 ceph-mon[74676]: pgmap v919: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 02 11:42:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:21.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:42:22 compute-0 ceph-mon[74676]: pgmap v920: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:42:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:22.680 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:22.681 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:22.682 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
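The acquiring/acquired/released triplet above is oslo.concurrency's standard named-lock instrumentation: the waited/held durations are measured around a process-local lock keyed by the string "_check_child_processes". A minimal sketch of the same pattern using only the public lockutils API:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named lock held; oslo logs how long the caller waited
        # for the lock and how long it was held, producing exactly the
        # "waited 0.002s" / "held 0.001s" lines above.
        pass

    check_child_processes()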
Feb 02 11:42:23 compute-0 nova_compute[251290]: 2026-02-02 11:42:23.197 251294 DEBUG nova.network.neutron [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updated VIF entry in instance network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:42:23 compute-0 nova_compute[251290]: 2026-02-02 11:42:23.197 251294 DEBUG nova.network.neutron [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updating instance_info_cache with network_info: [{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:42:23 compute-0 nova_compute[251290]: 2026-02-02 11:42:23.223 251294 DEBUG oslo_concurrency.lockutils [req-1db33cba-265d-45cb-91c6-affe02718033 req-117d8db7-cab0-45f6-82ce-2d6749b0c0e0 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
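The cache entry written at 11:42:23.197 is a list of VIF dicts; each VIF carries its network, subnets, fixed IPs, and any floating IPs. A self-contained sketch of pulling the addresses back out, using an abridged literal of the logged entry for port b82b20ab-af9a-43b9-a706-bb34ec1624ce:

    import json

    cached_blob = json.dumps([{            # abridged from the logged network_info
        "id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce",
        "network": {"subnets": [{"cidr": "10.100.0.0/28", "ips": [{
            "address": "10.100.0.5",
            "floating_ips": [{"address": "192.168.122.231"}],
        }]}]},
    }])
    for vif in json.loads(cached_blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])            # 10.100.0.5
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])    # 192.168.122.231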
Feb 02 11:42:23 compute-0 nova_compute[251290]: 2026-02-02 11:42:23.438 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:42:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:42:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:42:25 compute-0 ceph-mon[74676]: pgmap v921: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:42:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:25.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:25 compute-0 nova_compute[251290]: 2026-02-02 11:42:25.721 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Feb 02 11:42:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:26 compute-0 sudo[268973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:42:26 compute-0 sudo[268973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:26 compute-0 sudo[268973]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:26] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:42:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:26] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:42:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:42:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:42:27 compute-0 ceph-mon[74676]: pgmap v922: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Feb 02 11:42:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:27.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:28 compute-0 nova_compute[251290]: 2026-02-02 11:42:28.441 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 02 11:42:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:42:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:42:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:28.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
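Alertmanager cannot deliver its webhook to the dashboard receivers on compute-1/compute-2: the dial times out, the per-notify context expires, and the dispatcher gives up after its retries. A quick reproduction of the failure mode with a short client timeout; the URL is copied from the error text, while the minimal JSON payload is an assumption (Alertmanager posts a richer webhook message):

    import json, urllib.request, urllib.error

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url,
                                 data=json.dumps({"alerts": []}).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except urllib.error.URLError as exc:
        # With the receiver unreachable this surfaces the same class of error
        # ("i/o timeout" / connection refused) that Alertmanager is logging.
        print("notify failed:", exc.reason)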
Feb 02 11:42:28 compute-0 ovn_controller[154901]: 2026-02-02T11:42:28Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:77:3f 10.100.0.5
Feb 02 11:42:28 compute-0 ovn_controller[154901]: 2026-02-02T11:42:28Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:77:3f 10.100.0.5
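The DHCPOFFER/DHCPACK for fa:16:3e:53:77:3f (10.100.0.5, the fixed IP cached above) come from ovn-controller's pinctrl thread: OVN answers DHCP for the port natively, with no dnsmasq involved. One hedged way to see the options being served, assuming ovn-nbctl and northbound-DB access, which normally lives on the control plane rather than this compute node:

    import subprocess

    # Lists each Dhcp_Options row (cidr, server_id, lease_time, ...) that OVN
    # hands out via pinctrl; requires a reachable OVN northbound database.
    out = subprocess.run(["ovn-nbctl", "dhcp-options-list"],
                         capture_output=True, text=True, check=True).stdout
    print(out)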
Feb 02 11:42:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:42:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:42:29 compute-0 ceph-mon[74676]: pgmap v923: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:42:29
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.nfs', 'vms', 'default.rgw.control', 'images']
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
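A balancer pass in upmap mode just completed with nothing to do: 0 of up to 10 candidate upmap changes were prepared because the 353 PGs are already evenly placed. The same state can be read back programmatically; the keys shown are the usual ones in the status JSON, so treat them as an assumption:

    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("active"), status.get("mode"))   # e.g. True upmap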
Feb 02 11:42:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:42:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:29.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:42:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
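Every pool in this autoscaler pass lands on (or near) its current pg_num. The logged targets are reproducible as usage_ratio x bias x 300, where the factor 300 is an inference from the numbers (3 OSDs times the default mon_target_pg_per_osd of 100) rather than something the log states:

    def pg_target(usage_ratio: float, bias: float, factor: float = 300.0) -> float:
        return usage_ratio * bias * factor

    print(pg_target(0.00034841348814872695, 1.0))  # 0.10452... ('vms', quantized to 32)
    print(pg_target(5.087256625643029e-07, 4.0))   # 0.00061047... (cephfs.cephfs.meta)
    # Quantization then rounds to a power of two, subject to pool minimums, and
    # the autoscaler only acts when target and current pg_num differ enough.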
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:42:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 02 11:42:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:30 compute-0 nova_compute[251290]: 2026-02-02 11:42:30.723 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:31 compute-0 ceph-mon[74676]: pgmap v924: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 02 11:42:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:42:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:31.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:42:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Feb 02 11:42:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3664523784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:33 compute-0 nova_compute[251290]: 2026-02-02 11:42:33.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:33 compute-0 nova_compute[251290]: 2026-02-02 11:42:33.444 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:33 compute-0 ceph-mon[74676]: pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Feb 02 11:42:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2936891271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:33.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:34 compute-0 nova_compute[251290]: 2026-02-02 11:42:34.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:42:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1982096869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:34 compute-0 nova_compute[251290]: 2026-02-02 11:42:34.859 251294 INFO nova.compute.manager [None req-2a143195-ded5-409e-8d4d-6f98ca5af5cc abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Get console output
Feb 02 11:42:34 compute-0 nova_compute[251290]: 2026-02-02 11:42:34.867 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.051 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.052 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.052 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.052 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.053 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:42:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832371369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.575 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
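The resource audit shells out to size the RBD backend with the exact command in the log. Re-running it standalone with the same flags and reading the cluster totals from the JSON (the 'stats' section is where ceph df reports them):

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(raw)
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])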
Feb 02 11:42:35 compute-0 ceph-mon[74676]: pgmap v926: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 11:42:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3487518043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/832371369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.651 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.652 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:42:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:35.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.726 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.855 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.856 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4373MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.857 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.857 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.950 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance 0f8aafba-a457-4636-9680-282892d6ab4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.951 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:42:35 compute-0 nova_compute[251290]: 2026-02-02 11:42:35.951 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
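The final view's figures compose from the placement inventory reported a few lines below plus the one running instance: used_ram 640 MB is the 512 MB host reservation plus the instance's 128 MB, and the hypervisor view's free_vcpus 7 is the 8 host vCPUs minus the instance's 1, before any allocation ratio is applied. As arithmetic, assuming those reserved values:

    reserved_ram_mb, instance_ram_mb = 512, 128
    print("used_ram:", reserved_ram_mb + instance_ram_mb)   # 640 MB, as logged
    total_vcpus, used_vcpus = 8, 1
    print("free_vcpus:", total_vcpus - used_vcpus)          # 7, as in the earlier view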
Feb 02 11:42:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.048 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:42:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:42:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717719050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.540 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.545 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.563 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
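Placement derives schedulable capacity per resource class as (total - reserved) x allocation_ratio; applying that to the unchanged inventory just reported:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -- what the scheduler can place.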
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.602 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:42:36 compute-0 nova_compute[251290]: 2026-02-02 11:42:36.602 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:36 compute-0 ceph-mon[74676]: pgmap v927: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:42:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/717719050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:36] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:42:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:36] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb 02 11:42:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:37 compute-0 ovn_controller[154901]: 2026-02-02T11:42:37Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:77:3f 10.100.0.5
Feb 02 11:42:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:37 compute-0 nova_compute[251290]: 2026-02-02 11:42:37.604 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:37.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:38 compute-0 nova_compute[251290]: 2026-02-02 11:42:38.014 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:38 compute-0 nova_compute[251290]: 2026-02-02 11:42:38.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:38 compute-0 nova_compute[251290]: 2026-02-02 11:42:38.446 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:42:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:38.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:39 compute-0 nova_compute[251290]: 2026-02-02 11:42:39.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:39 compute-0 nova_compute[251290]: 2026-02-02 11:42:39.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:39 compute-0 nova_compute[251290]: 2026-02-02 11:42:39.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:42:39 compute-0 sudo[269057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:42:39 compute-0 sudo[269057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:39 compute-0 sudo[269057]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:39 compute-0 sudo[269082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:42:39 compute-0 sudo[269082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:39 compute-0 ceph-mon[74676]: pgmap v928: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 02 11:42:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:39.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:39 compute-0 sudo[269082]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.483 251294 DEBUG nova.compute.manager [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.484 251294 DEBUG nova.compute.manager [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing instance network info cache due to event network-changed-b82b20ab-af9a-43b9-a706-bb34ec1624ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.484 251294 DEBUG oslo_concurrency.lockutils [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.484 251294 DEBUG oslo_concurrency.lockutils [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.484 251294 DEBUG nova.network.neutron [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Refreshing network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:42:40 compute-0 ovn_controller[154901]: 2026-02-02T11:42:40Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:77:3f 10.100.0.5
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.645 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.646 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.646 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.646 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.646 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.648 251294 INFO nova.compute.manager [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Terminating instance
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.649 251294 DEBUG nova.compute.manager [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 11:42:40 compute-0 nova_compute[251290]: 2026-02-02 11:42:40.730 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.090 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.091 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:42:41 compute-0 kernel: tapb82b20ab-af (unregistering): left promiscuous mode
Feb 02 11:42:41 compute-0 NetworkManager[49067]: <info>  [1770032561.1031] device (tapb82b20ab-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.102 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 ovn_controller[154901]: 2026-02-02T11:42:41Z|00085|binding|INFO|Releasing lport b82b20ab-af9a-43b9-a706-bb34ec1624ce from this chassis (sb_readonly=0)
Feb 02 11:42:41 compute-0 ovn_controller[154901]: 2026-02-02T11:42:41Z|00086|binding|INFO|Setting lport b82b20ab-af9a-43b9-a706-bb34ec1624ce down in Southbound
Feb 02 11:42:41 compute-0 ovn_controller[154901]: 2026-02-02T11:42:41Z|00087|binding|INFO|Removing iface tapb82b20ab-af ovn-installed in OVS
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.114 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.120 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 02 11:42:41 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 13.390s CPU time.
Feb 02 11:42:41 compute-0 systemd-machined[218018]: Machine qemu-4-instance-0000000a terminated.
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.174 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:77:3f 10.100.0.5'], port_security=['fa:16:3e:53:77:3f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0f8aafba-a457-4636-9680-282892d6ab4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07ef7365-e94e-44a9-9670-d305a9339f4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c5f2d68-54aa-4a79-9d3d-e6c7b420c784', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efa3babe-77cb-4d79-b297-03518568f04f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=b82b20ab-af9a-43b9-a706-bb34ec1624ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.175 165304 INFO neutron.agent.ovn.metadata.agent [-] Port b82b20ab-af9a-43b9-a706-bb34ec1624ce in datapath 07ef7365-e94e-44a9-9670-d305a9339f4f unbound from our chassis
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.177 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07ef7365-e94e-44a9-9670-d305a9339f4f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.178 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[01977107-989a-4648-94ab-f0ba965ef25d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.179 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f namespace which is not needed anymore
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.271 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.276 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.289 251294 INFO nova.virt.libvirt.driver [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Instance destroyed successfully.
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.290 251294 DEBUG nova.objects.instance [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'resources' on Instance uuid 0f8aafba-a457-4636-9680-282892d6ab4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.327 251294 DEBUG nova.virt.libvirt.vif [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90669998',display_name='tempest-TestNetworkBasicOps-server-90669998',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90669998',id=10,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH7AYwk3S4KClmPtHKdjl+uWO9slW5bEgBmgkhf3Zf5ReJTfs28+H0FVQNO5XH303KjC3bOfUKkNsAk7HeguQTQsxi/gMlL6eFuYT31TuapZa7PnX5MQsq3SNyrRbxhLvQ==',key_name='tempest-TestNetworkBasicOps-1560896376',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:42:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-cusfi5f1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:42:16Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=0f8aafba-a457-4636-9680-282892d6ab4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.327 251294 DEBUG nova.network.os_vif_util [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.328 251294 DEBUG nova.network.os_vif_util [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.328 251294 DEBUG os_vif [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.330 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.330 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb82b20ab-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.331 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.333 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [NOTICE]   (268906) : haproxy version is 2.8.14-c23fe91
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [NOTICE]   (268906) : path to executable is /usr/sbin/haproxy
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [WARNING]  (268906) : Exiting Master process...
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [WARNING]  (268906) : Exiting Master process...
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [ALERT]    (268906) : Current worker (268908) exited with code 143 (Terminated)
Feb 02 11:42:41 compute-0 neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f[268902]: [WARNING]  (268906) : All workers exited. Exiting... (0)
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.337 251294 INFO os_vif [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:77:3f,bridge_name='br-int',has_traffic_filtering=True,id=b82b20ab-af9a-43b9-a706-bb34ec1624ce,network=Network(07ef7365-e94e-44a9-9670-d305a9339f4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82b20ab-af')
Feb 02 11:42:41 compute-0 systemd[1]: libpod-ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b.scope: Deactivated successfully.
Feb 02 11:42:41 compute-0 podman[269164]: 2026-02-02 11:42:41.346622332 +0000 UTC m=+0.086045298 container died ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 02 11:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b-userdata-shm.mount: Deactivated successfully.
Feb 02 11:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22308ff1fd0ab39fd3bbe6c04e8dfe54944935fff9690779845ac0499534a4f-merged.mount: Deactivated successfully.
Feb 02 11:42:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:41 compute-0 podman[269164]: 2026-02-02 11:42:41.642165603 +0000 UTC m=+0.381588559 container cleanup ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:42:41 compute-0 systemd[1]: libpod-conmon-ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b.scope: Deactivated successfully.
Feb 02 11:42:41 compute-0 ceph-mon[74676]: pgmap v929: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 02 11:42:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.738 251294 DEBUG nova.network.neutron [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updated VIF entry in instance network info cache for port b82b20ab-af9a-43b9-a706-bb34ec1624ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.739 251294 DEBUG nova.network.neutron [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updating instance_info_cache with network_info: [{"id": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "address": "fa:16:3e:53:77:3f", "network": {"id": "07ef7365-e94e-44a9-9670-d305a9339f4f", "bridge": "br-int", "label": "tempest-network-smoke--1144175326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82b20ab-af", "ovs_interfaceid": "b82b20ab-af9a-43b9-a706-bb34ec1624ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:42:41 compute-0 podman[269223]: 2026-02-02 11:42:41.753248588 +0000 UTC m=+0.091345600 container remove ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 11:42:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.759 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[21997bb1-4dca-4756-aee9-35f359bfb7fd]: (4, ('Mon Feb  2 11:42:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f (ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b)\nac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b\nMon Feb  2 11:42:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f (ac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b)\nac8fd6381c808df596e2fb5ed53b9aa0c5a4d57cbc7d850a7da131943151215b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.761 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[b430840c-8604-4ca3-96cc-c3ccf4352b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.763 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07ef7365-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:42:41 compute-0 kernel: tap07ef7365-e0: left promiscuous mode
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.766 251294 DEBUG oslo_concurrency.lockutils [req-9d1b77b6-b6bb-4be4-af9e-5d09317c038c req-dc32a18f-9705-47f7-a536-6fb0d7527770 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-0f8aafba-a457-4636-9680-282892d6ab4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.767 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 nova_compute[251290]: 2026-02-02 11:42:41.771 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.774 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab4068b-7abb-4e35-a889-856fc559ee06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.791 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a787f257-7c78-4ace-9949-ed3f0943f3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.793 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[81ffaf27-be8c-49e2-9dea-911df2cc5f4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.805 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[4351cc0c-1ae1-4f1e-a75f-07c0173a57e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418923, 'reachable_time': 18022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269239, 'error': None, 'target': 'ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.808 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07ef7365-e94e-44a9-9670-d305a9339f4f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:42:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d07ef7365\x2de94e\x2d44a9\x2d9670\x2dd305a9339f4f.mount: Deactivated successfully.
Feb 02 11:42:41 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:41.808 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[cc30a53f-ac3b-443d-b2db-1580594da67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.132 251294 INFO nova.virt.libvirt.driver [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Deleting instance files /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a_del
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.133 251294 INFO nova.virt.libvirt.driver [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Deletion of /var/lib/nova/instances/0f8aafba-a457-4636-9680-282892d6ab4a_del complete
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.213 251294 INFO nova.compute.manager [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Took 1.56 seconds to destroy the instance on the hypervisor.
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.214 251294 DEBUG oslo.service.loopingcall [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.214 251294 DEBUG nova.compute.manager [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.214 251294 DEBUG nova.network.neutron [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 11:42:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.555 251294 DEBUG nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-unplugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.555 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.556 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.556 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.556 251294 DEBUG nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] No waiting events found dispatching network-vif-unplugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.556 251294 DEBUG nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-unplugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.556 251294 DEBUG nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.557 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.557 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.557 251294 DEBUG oslo_concurrency.lockutils [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.557 251294 DEBUG nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] No waiting events found dispatching network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:42:42 compute-0 nova_compute[251290]: 2026-02-02 11:42:42.557 251294 WARNING nova.compute.manager [req-3935e494-4aaf-432e-9fc1-4710b6b8fca7 req-64dbd7a4-0731-45d2-8659-f476dd39b7fb 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received unexpected event network-vif-plugged-b82b20ab-af9a-43b9-a706-bb34ec1624ce for instance with vm_state active and task_state deleting.
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: pgmap v930: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:42:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 21 KiB/s wr, 2 op/s
Feb 02 11:42:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 26 KiB/s wr, 3 op/s
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:42:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:42:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:42:42 compute-0 sudo[269241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:42:42 compute-0 sudo[269241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:42 compute-0 sudo[269241]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:42 compute-0 sudo[269266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:42:42 compute-0 sudo[269266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.284151471 +0000 UTC m=+0.045745092 container create 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:42:43 compute-0 systemd[1]: Started libpod-conmon-5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d.scope.
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.26142559 +0000 UTC m=+0.023019231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.380681578 +0000 UTC m=+0.142275219 container init 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.388838472 +0000 UTC m=+0.150432093 container start 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.392499697 +0000 UTC m=+0.154093318 container attach 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:42:43 compute-0 relaxed_newton[269345]: 167 167
Feb 02 11:42:43 compute-0 systemd[1]: libpod-5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d.scope: Deactivated successfully.
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.396660546 +0000 UTC m=+0.158254167 container died 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a0a2af8a9258a4172ecbd901ffaeffdcdded94cdfbc7e0fb25270f1af79835a-merged.mount: Deactivated successfully.
Feb 02 11:42:43 compute-0 podman[269329]: 2026-02-02 11:42:43.432056401 +0000 UTC m=+0.193650022 container remove 5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_newton, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:42:43 compute-0 systemd[1]: libpod-conmon-5c8b66ce08bbc66a75d0a87dddef9c8cb39dfda340d6775d24939196d38b765d.scope: Deactivated successfully.
Feb 02 11:42:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:43 compute-0 podman[269369]: 2026-02-02 11:42:43.569722177 +0000 UTC m=+0.041307455 container create 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:43 compute-0 systemd[1]: Started libpod-conmon-43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674.scope.
Feb 02 11:42:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:43 compute-0 podman[269369]: 2026-02-02 11:42:43.552503743 +0000 UTC m=+0.024089051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:43 compute-0 podman[269369]: 2026-02-02 11:42:43.653611562 +0000 UTC m=+0.125196870 container init 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:43 compute-0 podman[269369]: 2026-02-02 11:42:43.659507061 +0000 UTC m=+0.131092349 container start 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:42:43 compute-0 podman[269369]: 2026-02-02 11:42:43.664146654 +0000 UTC m=+0.135731962 container attach 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:42:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:43.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:42:43 compute-0 ceph-mon[74676]: pgmap v931: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 21 KiB/s wr, 2 op/s
Feb 02 11:42:43 compute-0 ceph-mon[74676]: pgmap v932: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 26 KiB/s wr, 3 op/s
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:42:43 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:42:43 compute-0 nova_compute[251290]: 2026-02-02 11:42:43.894 251294 DEBUG nova.network.neutron [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:42:43 compute-0 nova_compute[251290]: 2026-02-02 11:42:43.916 251294 INFO nova.compute.manager [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Took 1.70 seconds to deallocate network for instance.
Feb 02 11:42:43 compute-0 nova_compute[251290]: 2026-02-02 11:42:43.969 251294 DEBUG nova.compute.manager [req-dbda8dfa-5534-4948-bfcc-f130859626d4 req-c16581a7-bf3a-4362-b9fd-0d74d0c8bb6c 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Received event network-vif-deleted-b82b20ab-af9a-43b9-a706-bb34ec1624ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:42:43 compute-0 nova_compute[251290]: 2026-02-02 11:42:43.979 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:43 compute-0 nova_compute[251290]: 2026-02-02 11:42:43.980 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:43 compute-0 dreamy_hoover[269385]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:42:43 compute-0 dreamy_hoover[269385]: --> All data devices are unavailable
Feb 02 11:42:44 compute-0 systemd[1]: libpod-43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674.scope: Deactivated successfully.
Feb 02 11:42:44 compute-0 podman[269369]: 2026-02-02 11:42:44.025650925 +0000 UTC m=+0.497236233 container died 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.031 251294 DEBUG oslo_concurrency.processutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e567fce5cfd4310e6e9db8a9c63e23408b0cabe1712a571290a42437b93e59d7-merged.mount: Deactivated successfully.
Feb 02 11:42:44 compute-0 podman[269369]: 2026-02-02 11:42:44.068579976 +0000 UTC m=+0.540165264 container remove 43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:42:44 compute-0 systemd[1]: libpod-conmon-43a9bfbe6974c6baed9b4fd769657f4330f58dd25c96e5c5964bfe296b2af674.scope: Deactivated successfully.
Feb 02 11:42:44 compute-0 sudo[269266]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:44 compute-0 sudo[269413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:42:44 compute-0 sudo[269413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:44 compute-0 sudo[269413]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:44 compute-0 sudo[269457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:42:44 compute-0 sudo[269457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:42:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078339666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.523 251294 DEBUG oslo_concurrency.processutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.531 251294 DEBUG nova.compute.provider_tree [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.551 251294 DEBUG nova.scheduler.client.report [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.581 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:42:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.616 251294 INFO nova.scheduler.client.report [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Deleted allocations for instance 0f8aafba-a457-4636-9680-282892d6ab4a
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.684045048 +0000 UTC m=+0.045971419 container create deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:42:44 compute-0 nova_compute[251290]: 2026-02-02 11:42:44.686 251294 DEBUG oslo_concurrency.lockutils [None req-f30a671c-63f3-4775-a39c-c2b7823c47ef abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "0f8aafba-a457-4636-9680-282892d6ab4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:42:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 868 B/s rd, 7.8 KiB/s wr, 1 op/s
Feb 02 11:42:44 compute-0 systemd[1]: Started libpod-conmon-deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f.scope.
Feb 02 11:42:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1151728171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:42:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1151728171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:42:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4078339666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:42:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:42:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.662640474 +0000 UTC m=+0.024566875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.76750434 +0000 UTC m=+0.129430741 container init deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.774397268 +0000 UTC m=+0.136323639 container start deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:42:44 compute-0 objective_turing[269540]: 167 167
Feb 02 11:42:44 compute-0 systemd[1]: libpod-deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f.scope: Deactivated successfully.
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.781143641 +0000 UTC m=+0.143070022 container attach deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.782459219 +0000 UTC m=+0.144385590 container died deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6110bc0ac4f37046509ef20359b56b4bb9c71bebfefdff30069f9d7ec319b446-merged.mount: Deactivated successfully.
Feb 02 11:42:44 compute-0 podman[269524]: 2026-02-02 11:42:44.841853012 +0000 UTC m=+0.203779383 container remove deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:42:44 compute-0 systemd[1]: libpod-conmon-deac7c59db6d115f65b4bd60befe92c998f7e2913a07eceea7cc8bd5084d072f.scope: Deactivated successfully.
Feb 02 11:42:44 compute-0 podman[269565]: 2026-02-02 11:42:44.989240196 +0000 UTC m=+0.046397341 container create 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:45 compute-0 systemd[1]: Started libpod-conmon-13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f.scope.
Feb 02 11:42:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260eaa189b2bc61a5689f40dbe0108d6f92d445a0fc68561783f98d49896a8ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260eaa189b2bc61a5689f40dbe0108d6f92d445a0fc68561783f98d49896a8ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260eaa189b2bc61a5689f40dbe0108d6f92d445a0fc68561783f98d49896a8ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260eaa189b2bc61a5689f40dbe0108d6f92d445a0fc68561783f98d49896a8ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:44.969410278 +0000 UTC m=+0.026567443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:45.081625505 +0000 UTC m=+0.138782650 container init 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:45.086860225 +0000 UTC m=+0.144017370 container start 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:45.090637503 +0000 UTC m=+0.147794738 container attach 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:42:45 compute-0 elated_banzai[269582]: {
Feb 02 11:42:45 compute-0 elated_banzai[269582]:     "1": [
Feb 02 11:42:45 compute-0 elated_banzai[269582]:         {
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "devices": [
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "/dev/loop3"
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             ],
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "lv_name": "ceph_lv0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "lv_size": "21470642176",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "name": "ceph_lv0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "tags": {
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.cluster_name": "ceph",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.crush_device_class": "",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.encrypted": "0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.osd_id": "1",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.type": "block",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.vdo": "0",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:                 "ceph.with_tpm": "0"
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             },
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "type": "block",
Feb 02 11:42:45 compute-0 elated_banzai[269582]:             "vg_name": "ceph_vg0"
Feb 02 11:42:45 compute-0 elated_banzai[269582]:         }
Feb 02 11:42:45 compute-0 elated_banzai[269582]:     ]
Feb 02 11:42:45 compute-0 elated_banzai[269582]: }
Feb 02 11:42:45 compute-0 systemd[1]: libpod-13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f.scope: Deactivated successfully.
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:45.382333514 +0000 UTC m=+0.439490659 container died 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-260eaa189b2bc61a5689f40dbe0108d6f92d445a0fc68561783f98d49896a8ab-merged.mount: Deactivated successfully.
Feb 02 11:42:45 compute-0 podman[269565]: 2026-02-02 11:42:45.428699053 +0000 UTC m=+0.485856198 container remove 13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 02 11:42:45 compute-0 systemd[1]: libpod-conmon-13d6b37df9e080c3572af2f7d70fdd65f4aff93c02622a6fa3b379d764bfde2f.scope: Deactivated successfully.
Feb 02 11:42:45 compute-0 sudo[269457]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:45 compute-0 sudo[269602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:42:45 compute-0 sudo[269602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:45 compute-0 sudo[269602]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:45 compute-0 sudo[269627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:42:45 compute-0 sudo[269627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:45 compute-0 nova_compute[251290]: 2026-02-02 11:42:45.731 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:45 compute-0 ceph-mon[74676]: pgmap v933: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 868 B/s rd, 7.8 KiB/s wr, 1 op/s
Feb 02 11:42:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.028658231 +0000 UTC m=+0.053560066 container create ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:42:46 compute-0 systemd[1]: Started libpod-conmon-ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121.scope.
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:45.999291959 +0000 UTC m=+0.024193814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.118241969 +0000 UTC m=+0.143143804 container init ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.126324151 +0000 UTC m=+0.151225986 container start ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:42:46 compute-0 stoic_sutherland[269708]: 167 167
Feb 02 11:42:46 compute-0 systemd[1]: libpod-ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121.scope: Deactivated successfully.
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.131791067 +0000 UTC m=+0.156692902 container attach ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.132199469 +0000 UTC m=+0.157101304 container died ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb4c87ec1d9101204a39334a2de6b1dc21b0ed85fc82d95516efef8baeca442-merged.mount: Deactivated successfully.
Feb 02 11:42:46 compute-0 podman[269692]: 2026-02-02 11:42:46.182715597 +0000 UTC m=+0.207617432 container remove ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:46 compute-0 systemd[1]: libpod-conmon-ba23e718ba25c7841d3b1c13e1da1e4f5d999d45db74251256f2bb9321962121.scope: Deactivated successfully.
Feb 02 11:42:46 compute-0 podman[269733]: 2026-02-02 11:42:46.332986515 +0000 UTC m=+0.049271264 container create cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:42:46 compute-0 nova_compute[251290]: 2026-02-02 11:42:46.333 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:46 compute-0 systemd[1]: Started libpod-conmon-cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e.scope.
Feb 02 11:42:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb7ce85900bb941ab005594da80fb6220c1f06711844d7a67ff35ff48c2f45b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb7ce85900bb941ab005594da80fb6220c1f06711844d7a67ff35ff48c2f45b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb7ce85900bb941ab005594da80fb6220c1f06711844d7a67ff35ff48c2f45b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb7ce85900bb941ab005594da80fb6220c1f06711844d7a67ff35ff48c2f45b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:42:46 compute-0 podman[269733]: 2026-02-02 11:42:46.311375425 +0000 UTC m=+0.027660174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:42:46 compute-0 podman[269733]: 2026-02-02 11:42:46.427956577 +0000 UTC m=+0.144241346 container init cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:42:46 compute-0 podman[269733]: 2026-02-02 11:42:46.434264758 +0000 UTC m=+0.150549507 container start cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:42:46 compute-0 podman[269733]: 2026-02-02 11:42:46.439543959 +0000 UTC m=+0.155828738 container attach cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb 02 11:42:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 9.5 KiB/s wr, 82 op/s
Feb 02 11:42:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:46 compute-0 sudo[269794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:42:46 compute-0 sudo[269794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:46 compute-0 sudo[269794]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:46] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:42:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:46] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:42:47 compute-0 lvm[269850]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:42:47 compute-0 lvm[269850]: VG ceph_vg0 finished
Feb 02 11:42:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:47.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:47 compute-0 brave_rosalind[269750]: {}
Feb 02 11:42:47 compute-0 systemd[1]: libpod-cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e.scope: Deactivated successfully.
Feb 02 11:42:47 compute-0 systemd[1]: libpod-cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e.scope: Consumed 1.150s CPU time.
Feb 02 11:42:47 compute-0 podman[269854]: 2026-02-02 11:42:47.277420187 +0000 UTC m=+0.029734064 container died cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb7ce85900bb941ab005594da80fb6220c1f06711844d7a67ff35ff48c2f45b-merged.mount: Deactivated successfully.
Feb 02 11:42:47 compute-0 podman[269854]: 2026-02-02 11:42:47.329478049 +0000 UTC m=+0.081791906 container remove cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:42:47 compute-0 systemd[1]: libpod-conmon-cb3abfb88a88fd1813a6a5b3a62c544e3cc7a246642abea10828bcc57171715e.scope: Deactivated successfully.
Feb 02 11:42:47 compute-0 sudo[269627]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:42:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:42:47 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:47 compute-0 sudo[269868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:42:47 compute-0 sudo[269868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:42:47 compute-0 sudo[269868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:42:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:47.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:47 compute-0 nova_compute[251290]: 2026-02-02 11:42:47.989 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:48 compute-0 nova_compute[251290]: 2026-02-02 11:42:48.015 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:48 compute-0 ceph-mon[74676]: pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 9.5 KiB/s wr, 82 op/s
Feb 02 11:42:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:42:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 9.5 KiB/s wr, 82 op/s
Feb 02 11:42:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:48.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:49.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:50 compute-0 podman[269897]: 2026-02-02 11:42:50.282171228 +0000 UTC m=+0.065264242 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:42:50 compute-0 podman[269898]: 2026-02-02 11:42:50.311692484 +0000 UTC m=+0.094269253 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller)
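
Both podman events record a scheduled container health check passing: the config embedded in each event mounts /var/lib/openstack/healthchecks/<name> into the container and runs '/openstack/healthcheck' as the test, and health_failing_streak=0 means recent runs succeeded. The same check can be run on demand; a sketch (container name taken from the event above):

    # Run the same health check podman executes periodically for ovn_controller.
    # Container name comes from the event above.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")
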
Feb 02 11:42:50 compute-0 ceph-mon[74676]: pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 9.5 KiB/s wr, 82 op/s
Feb 02 11:42:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 1.7 KiB/s wr, 225 op/s
Feb 02 11:42:50 compute-0 nova_compute[251290]: 2026-02-02 11:42:50.745 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
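
This four-line Ganesha sequence recurs throughout the excerpt: the NFS server re-enters a 90-second grace period, reloads client reclaim info from its RADOS backend, and checks whether grace can be lifted; with clid count(0) there are no clients to wait for. The ret=-45 from rados_cluster_grace_enforcing is a negative errno, which can be decoded (assuming Linux errno numbering):

    # Decode the negative errno logged by rados_cluster_grace_enforcing.
    # Assumes Linux errno numbering.
    import errno, os

    ret = -45
    print(errno.errorcode.get(-ret, "unknown"), "-", os.strerror(-ret))
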
Feb 02 11:42:51 compute-0 nova_compute[251290]: 2026-02-02 11:42:51.335 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
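
_set_new_cache_sizes is the monitor's periodic memory autotuning: it re-splits its cache budget between incremental osdmaps, full osdmaps, and the key-value cache. The three allocations logged above should account for nearly the whole cache_size; checking the arithmetic:

    # Sanity-check the mon cache split logged above: the three allocations
    # should sum to approximately cache_size.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size, f"{total / cache_size:.1%}")   # ~99% of the budget
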
Feb 02 11:42:52 compute-0 ceph-mon[74676]: pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 1.7 KiB/s wr, 225 op/s
Feb 02 11:42:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 1.4 KiB/s wr, 226 op/s
Feb 02 11:42:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:53.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:53.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:54 compute-0 ceph-mon[74676]: pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 1.4 KiB/s wr, 226 op/s
Feb 02 11:42:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.2 KiB/s wr, 188 op/s
Feb 02 11:42:55 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:55.200 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:42:55 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:42:55.201 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
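
The metadata agent saw nb_cfg tick from 10 to 11 in SB_Global and deliberately waits before acknowledging it in its Chassis_Private row (the write lands at 11:43:03 below, setting neutron:ovn-metadata-sb-cfg to 11); the delay appears to stagger chassis updates so every agent does not hit the southbound DB at once. A delay-then-acknowledge sketch of that shape (illustrative only; the real agent goes through ovsdbapp transactions):

    # Illustrative delay-then-acknowledge, mirroring the agent's behaviour above:
    # note the new nb_cfg, wait a randomized interval, then record it.
    import random, threading

    def ack_nb_cfg(nb_cfg, chassis_external_ids):
        chassis_external_ids["neutron:ovn-metadata-sb-cfg"] = str(nb_cfg)
        print("acknowledged", chassis_external_ids)

    external_ids = {}
    delay = random.randint(0, 10)   # the log above shows an 8-second delay was drawn
    threading.Timer(delay, ack_nb_cfg, args=(11, external_ids)).start()
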
Feb 02 11:42:55 compute-0 nova_compute[251290]: 2026-02-02 11:42:55.203 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:55.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:55 compute-0 nova_compute[251290]: 2026-02-02 11:42:55.755 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:42:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:42:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:42:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:42:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:42:56 compute-0 nova_compute[251290]: 2026-02-02 11:42:56.289 251294 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770032561.2879398, 0f8aafba-a457-4636-9680-282892d6ab4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:42:56 compute-0 nova_compute[251290]: 2026-02-02 11:42:56.290 251294 INFO nova.compute.manager [-] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] VM Stopped (Lifecycle Event)
Feb 02 11:42:56 compute-0 nova_compute[251290]: 2026-02-02 11:42:56.312 251294 DEBUG nova.compute.manager [None req-c3cdf7f5-ace6-4d02-9966-fbfa4bc3a25e - - - - - -] [instance: 0f8aafba-a457-4636-9680-282892d6ab4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
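
The Stopped lifecycle event originates from libvirt, after which the compute manager re-reads the domain's power state to keep its database in sync. The same query through the libvirt Python binding, as a sketch (instance UUID copied from the log; assumes libvirt-python and access to the local hypervisor socket):

    # Query the power state Nova is checking above, straight from libvirt.
    # Instance UUID taken from the log; requires libvirt-python.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("0f8aafba-a457-4636-9680-282892d6ab4a")
    state, reason = dom.state()
    print("running" if state == libvirt.VIR_DOMAIN_RUNNING else f"state={state}")
    conn.close()
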
Feb 02 11:42:56 compute-0 nova_compute[251290]: 2026-02-02 11:42:56.337 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:42:56 compute-0 ceph-mon[74676]: pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.2 KiB/s wr, 188 op/s
Feb 02 11:42:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.2 KiB/s wr, 189 op/s
Feb 02 11:42:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:42:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:56] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:42:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:42:56] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb 02 11:42:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:57.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:58 compute-0 ceph-mon[74676]: pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.2 KiB/s wr, 189 op/s
Feb 02 11:42:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 133 op/s
Feb 02 11:42:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:42:58.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:42:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:42:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:42:59.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:42:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:42:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
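
The audit line shows the mgr (entity mgr.compute-0.dhyzzj) polling the monitor for the OSD blocklist, the same query that `ceph osd blocklist ls` issues. A sketch of the equivalent mon command through the librados Python binding (the conffile path is an assumption; the command JSON matches the audit record):

    # Issue the same mon command the audit log records, via librados.
    # conffile path is an assumption; the command JSON matches the audit line.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        print(ret, outbuf.decode() or outs)
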
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:42:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:42:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:42:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:42:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:42:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.765 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.765 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.797 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.880 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.881 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.890 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 11:42:59 compute-0 nova_compute[251290]: 2026-02-02 11:42:59.890 251294 INFO nova.compute.claims [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Claim successful on node compute-0.ctlplane.example.com
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.142 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:00 compute-0 ceph-mon[74676]: pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 133 op/s
Feb 02 11:43:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:43:00 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512276193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.631 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
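
Before claiming disk for the new instance, Nova shells out to `ceph df` to size its RBD backend; the 0.489 s runtime includes connecting to the monitors. A sketch that re-runs the same query and pulls one pool's stats (the pool name `vms` comes from the rbd import further down; the JSON keys follow the standard `ceph df --format=json` layout):

    # Re-run the capacity query Nova executed above and pull the 'vms' pool stats.
    # Pool name comes from the rbd import later in the log.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    vms = next(p for p in df["pools"] if p["name"] == "vms")
    print(vms["stats"])   # bytes_used / max_avail etc. for the vms pool
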
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.638 251294 DEBUG nova.compute.provider_tree [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.679 251294 DEBUG nova.scheduler.client.report [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
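
The inventory above is what placement schedules against: per resource class, capacity is (total - reserved) * allocation_ratio. Working that out from the logged numbers:

    # Compute effective schedulable capacity from the inventory Nova reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
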
Feb 02 11:43:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 133 op/s
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.722 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.723 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.798 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.800 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.801 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.827 251294 INFO nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 11:43:00 compute-0 nova_compute[251290]: 2026-02-02 11:43:00.856 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 11:43:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.081 251294 DEBUG nova.policy [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abee87546a344ef285e2e269d2c74792', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3240aa599bd249a3b72e42fcc47af557', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
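
The failed policy check is expected for a plain member/reader token: Nova evaluates network:attach_external_network against the request credentials, and a failure here just means the (admin-only by default) rule did not match; the boot continues. A sketch of that evaluation with oslo.policy (the check string 'role:admin' is an assumption approximating Nova's default):

    # Sketch of the policy evaluation logged above, using oslo.policy.
    # The check string 'role:admin' is an assumption approximating Nova's default.
    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.ConfigOpts()
    conf([])   # parse an empty argv so option access works
    enforcer = policy.Enforcer(conf)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"],
             "project_id": "3240aa599bd249a3b72e42fcc47af557"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
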
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.094 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.096 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.096 251294 INFO nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Creating image(s)
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.124 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.156 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.192 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.197 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.251 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.252 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.253 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.253 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.288 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.292 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 336c40ec-af53-4724-8c01-2cec821a49f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.339 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2512276193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.538 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297 336c40ec-af53-4724-8c01-2cec821a49f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:01.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.614 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] resizing rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
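
The lines above show the RBD image backend at work: Nova inspects the cached base image with qemu-img under a prlimit sandbox, imports it into the vms pool as <uuid>_disk, then grows it to the flavor's 1 GiB root disk (root_gb=1). Nova does the resize through librbd; a CLI-equivalent sketch of the three steps (paths, pool and image names copied from the log; rbd size-suffix support assumed):

    # The three image-backend steps from the log: inspect, import, resize.
    # Paths, pool and image names are copied from the log lines above.
    import json, subprocess

    base = "/var/lib/nova/instances/_base/6cd4a7aa5659e1f2c913c7e14ac0f6125c7b1297"
    disk = "336c40ec-af53-4724-8c01-2cec821a49f3_disk"

    info = json.loads(subprocess.run(
        ["qemu-img", "info", base, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True).stdout)
    print(info["format"], info["virtual-size"])

    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1G", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
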
Feb 02 11:43:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.758 251294 DEBUG nova.objects.instance [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'migration_context' on Instance uuid 336c40ec-af53-4724-8c01-2cec821a49f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:43:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.775 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.776 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Ensure instance console log exists: /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.776 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.777 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:01 compute-0 nova_compute[251290]: 2026-02-02 11:43:01.777 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
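
The Acquiring/acquired/released triples with waited/held timings come from oslo.concurrency's lockutils, which Nova uses for in-process synchronization (e.g. "compute_resources" was held 0.841 s during the claim above). A minimal sketch of the same pattern (the lock name here is illustrative, not one Nova uses):

    # Minimal oslo.concurrency lock pattern behind the acquire/release log lines.
    # The lock name here is illustrative, not one Nova uses.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("example_resources")
    def claim():
        # Work done while holding the in-process lock; oslo logs the
        # waited/held durations seen in the journal above.
        return "claimed"

    print(claim())
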
Feb 02 11:43:02 compute-0 nova_compute[251290]: 2026-02-02 11:43:02.308 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Successfully created port: cdbbf387-8780-4514-a372-9c5160d9e694 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 11:43:02 compute-0 ceph-mon[74676]: pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 133 op/s
Feb 02 11:43:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Feb 02 11:43:03 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:03.204 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:04 compute-0 ceph-mon[74676]: pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Feb 02 11:43:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:43:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:05.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:05 compute-0 nova_compute[251290]: 2026-02-02 11:43:05.800 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.344 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.396 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Successfully updated port: cdbbf387-8780-4514-a372-9c5160d9e694 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.415 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.416 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.416 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.512 251294 DEBUG nova.compute.manager [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.513 251294 DEBUG nova.compute.manager [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing instance network info cache due to event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.513 251294 DEBUG oslo_concurrency.lockutils [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:43:06 compute-0 ceph-mon[74676]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:43:06 compute-0 nova_compute[251290]: 2026-02-02 11:43:06.622 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 11:43:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:43:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:43:07 compute-0 sudo[270145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:43:07 compute-0 sudo[270145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:07 compute-0 sudo[270145]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:07.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:08 compute-0 ceph-mon[74676]: pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:08.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.300 251294 DEBUG nova.network.neutron [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.321 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.322 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Instance network_info: |[{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.322 251294 DEBUG oslo_concurrency.lockutils [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.322 251294 DEBUG nova.network.neutron [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.325 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Start _get_guest_xml network_info=[{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '8a4b36bd-584f-4a0a-aab3-55c0b12d2d97'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
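
The network_info blob cached for the port carries everything the guest wiring needs: MAC, fixed IP, bridge, MTU, and the tap device name. Pulling the essentials out of an abridged copy of that JSON (data copied from the cache update above):

    # Extract the key fields from the network_info cached for port cdbbf387.
    # The JSON is an abridged copy of the cache entry logged above.
    import json

    network_info = json.loads("""[{
      "id": "cdbbf387-8780-4514-a372-9c5160d9e694",
      "address": "fa:16:3e:f8:9e:2c",
      "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4",
                  "bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "gateway": {"address": "10.100.0.1"},
                               "ips": [{"address": "10.100.0.14"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tapcdbbf387-87",
      "type": "ovs"}]""")

    vif = network_info[0]
    ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(vif["devname"], vif["address"], ip, "mtu", vif["network"]["meta"]["mtu"])
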
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.330 251294 WARNING nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.337 251294 DEBUG nova.virt.libvirt.host [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.338 251294 DEBUG nova.virt.libvirt.host [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.345 251294 DEBUG nova.virt.libvirt.host [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.345 251294 DEBUG nova.virt.libvirt.host [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.346 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.346 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:33:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5413fce8-24ad-46a1-a21e-000a8299c8f6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:33:57Z,direct_url=<?>,disk_format='qcow2',id=8a4b36bd-584f-4a0a-aab3-55c0b12d2d97,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='298a2ae7f4e04d87bebf3a1c7834ef26',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:34:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.346 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.347 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.347 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.347 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.347 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.347 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.348 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.348 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.348 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.348 251294 DEBUG nova.virt.hardware [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
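
The 0:0:0 preferences and 65536 limits above mean "unconstrained": the driver then enumerates every sockets x cores x threads factorisation of the vcpu count and sorts by preference. A sketch of that enumeration (hypothetical helper, not nova's code), which for the single-vcpu m1.nano flavor yields exactly the one topology logged:

    # topologies.py - sketch of the sockets/cores/threads search above
    def possible_topologies(vcpus: int,
                            max_sockets: int = 65536,
                            max_cores: int = 65536,
                            max_threads: int = 65536):
        # every (sockets, cores, threads) triple whose product is exactly vcpus
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], the single topology logged
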
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.351 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:09 compute-0 ceph-mon[74676]: pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.865 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.890 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:09 compute-0 nova_compute[251290]: 2026-02-02 11:43:09.894 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 11:43:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336338065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.394 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
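
Note that the driver shells out to the ceph CLI here rather than calling librados directly; the JSON from "mon dump" carries a mons list with one addr per monitor, which is how the three <host> entries in the domain XML that follows get populated. A minimal reproduction (the "mons"/"addr" field names are assumed from ceph's standard JSON output, not shown in this log):

    # mon_dump.py - run the same "ceph mon dump" the driver shells out to
    import json
    import subprocess

    def ceph_mon_addrs(user: str = "openstack",
                       conf: str = "/etc/ceph/ceph.conf") -> list:
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf])
        dump = json.loads(out)
        # one entry per monitor; "addr" looks like "192.168.122.100:6789/0"
        return [mon["addr"] for mon in dump.get("mons", [])]
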
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.396 251294 DEBUG nova.virt.libvirt.vif [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-640321398',display_name='tempest-TestNetworkBasicOps-server-640321398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-640321398',id=11,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKrY/FdowAFVklD/wXNvtBo9XHL36YOm0NhB6ClNMLwm16hZQnK9nS2yj5JzsZjBPQ4VmPqcFH9YEh8Lga2thc5KERzORTkCZDm1xZtV4CAwrPf0SkJpTQ6TioHUEA/3g==',key_name='tempest-TestNetworkBasicOps-2123245503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-lvum5ybr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:43:00Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=336c40ec-af53-4724-8c01-2cec821a49f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.396 251294 DEBUG nova.network.os_vif_util [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.397 251294 DEBUG nova.network.os_vif_util [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.398 251294 DEBUG nova.objects.instance [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'pci_devices' on Instance uuid 336c40ec-af53-4724-8c01-2cec821a49f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.429 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] End _get_guest_xml xml=<domain type="kvm">
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <uuid>336c40ec-af53-4724-8c01-2cec821a49f3</uuid>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <name>instance-0000000b</name>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <memory>131072</memory>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <vcpu>1</vcpu>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <metadata>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:name>tempest-TestNetworkBasicOps-server-640321398</nova:name>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:creationTime>2026-02-02 11:43:09</nova:creationTime>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:flavor name="m1.nano">
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:memory>128</nova:memory>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:disk>1</nova:disk>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:swap>0</nova:swap>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:vcpus>1</nova:vcpus>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </nova:flavor>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:owner>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:user uuid="abee87546a344ef285e2e269d2c74792">tempest-TestNetworkBasicOps-571256976-project-member</nova:user>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:project uuid="3240aa599bd249a3b72e42fcc47af557">tempest-TestNetworkBasicOps-571256976</nova:project>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </nova:owner>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:root type="image" uuid="8a4b36bd-584f-4a0a-aab3-55c0b12d2d97"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <nova:ports>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <nova:port uuid="cdbbf387-8780-4514-a372-9c5160d9e694">
Feb 02 11:43:10 compute-0 nova_compute[251290]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         </nova:port>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </nova:ports>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </nova:instance>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </metadata>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <sysinfo type="smbios">
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <system>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="manufacturer">RDO</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="product">OpenStack Compute</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="serial">336c40ec-af53-4724-8c01-2cec821a49f3</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="uuid">336c40ec-af53-4724-8c01-2cec821a49f3</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <entry name="family">Virtual Machine</entry>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </system>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </sysinfo>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <os>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <boot dev="hd"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <smbios mode="sysinfo"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </os>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <features>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <acpi/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <apic/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <vmcoreinfo/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </features>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <clock offset="utc">
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <timer name="hpet" present="no"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </clock>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <cpu mode="host-model" match="exact">
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </cpu>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   <devices>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <disk type="network" device="disk">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/336c40ec-af53-4724-8c01-2cec821a49f3_disk">
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </source>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <target dev="vda" bus="virtio"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <disk type="network" device="cdrom">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <driver type="raw" cache="none"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <source protocol="rbd" name="vms/336c40ec-af53-4724-8c01-2cec821a49f3_disk.config">
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.100" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.102" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <host name="192.168.122.101" port="6789"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </source>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <auth username="openstack">
Feb 02 11:43:10 compute-0 nova_compute[251290]:         <secret type="ceph" uuid="1d33f80b-d6ca-501c-bac7-184379b89279"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       </auth>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <target dev="sda" bus="sata"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </disk>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <interface type="ethernet">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <mac address="fa:16:3e:f8:9e:2c"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <mtu size="1442"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <target dev="tapcdbbf387-87"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </interface>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <serial type="pty">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <log file="/var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/console.log" append="off"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </serial>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <video>
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <model type="virtio"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </video>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <input type="tablet" bus="usb"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <rng model="virtio">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <backend model="random">/dev/urandom</backend>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </rng>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <controller type="usb" index="0"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     <memballoon model="virtio">
Feb 02 11:43:10 compute-0 nova_compute[251290]:       <stats period="10"/>
Feb 02 11:43:10 compute-0 nova_compute[251290]:     </memballoon>
Feb 02 11:43:10 compute-0 nova_compute[251290]:   </devices>
Feb 02 11:43:10 compute-0 nova_compute[251290]: </domain>
Feb 02 11:43:10 compute-0 nova_compute[251290]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
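
Everything libvirt needs to reach the Ceph cluster is embedded in that domain XML. A short sketch that pulls the RBD sources back out of a definition like the one above, using only the standard library:

    # parse_domain.py - extract the RBD disk sources from a libvirt domain XML
    import xml.etree.ElementTree as ET

    def rbd_sources(domain_xml: str):
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk[@type='network']"):
            src = disk.find("source")
            if src is None:
                continue
            hosts = [(h.get("name"), h.get("port")) for h in src.findall("host")]
            yield src.get("name"), hosts

Run against the domain above, this yields vms/336c40ec-af53-4724-8c01-2cec821a49f3_disk and vms/336c40ec-af53-4724-8c01-2cec821a49f3_disk.config, each backed by the same three monitors on port 6789.
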
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.430 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Preparing to wait for external event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.430 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.431 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.431 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.432 251294 DEBUG nova.virt.libvirt.vif [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-640321398',display_name='tempest-TestNetworkBasicOps-server-640321398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-640321398',id=11,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKrY/FdowAFVklD/wXNvtBo9XHL36YOm0NhB6ClNMLwm16hZQnK9nS2yj5JzsZjBPQ4VmPqcFH9YEh8Lga2thc5KERzORTkCZDm1xZtV4CAwrPf0SkJpTQ6TioHUEA/3g==',key_name='tempest-TestNetworkBasicOps-2123245503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-lvum5ybr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:43:00Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=336c40ec-af53-4724-8c01-2cec821a49f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.432 251294 DEBUG nova.network.os_vif_util [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.433 251294 DEBUG nova.network.os_vif_util [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.433 251294 DEBUG os_vif [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.434 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.434 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.435 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.438 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.438 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdbbf387-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.439 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcdbbf387-87, col_values=(('external_ids', {'iface-id': 'cdbbf387-8780-4514-a372-9c5160d9e694', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:9e:2c', 'vm-uuid': '336c40ec-af53-4724-8c01-2cec821a49f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.441 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:10 compute-0 NetworkManager[49067]: <info>  [1770032590.4428] manager: (tapcdbbf387-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.444 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.450 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.451 251294 INFO os_vif [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87')
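
The two ovsdbapp transactions above (AddBridgeCommand was a no-op since br-int already exists, hence "Transaction caused no change"; AddPortCommand plus DbSetCommand did the work) are equivalent to a single ovs-vsctl invocation. ovn-controller watches the iface-id external-id and claims the matching logical port, which is exactly what the ovn_controller lines a second later show. A sketch of the equivalent call (hypothetical wrapper; the commented values are the ones from this log):

    # plug_ovs.py - ovs-vsctl equivalent of the AddPort/DbSet transaction above
    import subprocess

    def plug_vif(bridge: str, dev: str, iface_id: str, mac: str, vm_uuid: str) -> None:
        subprocess.check_call([
            "ovs-vsctl", "--may-exist", "add-port", bridge, dev,
            "--", "set", "Interface", dev,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])

    # plug_vif("br-int", "tapcdbbf387-87",
    #          "cdbbf387-8780-4514-a372-9c5160d9e694",
    #          "fa:16:3e:f8:9e:2c", "336c40ec-af53-4724-8c01-2cec821a49f3")
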
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.512 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.512 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.512 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] No VIF found with MAC fa:16:3e:f8:9e:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.513 251294 INFO nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Using config drive
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.540 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2600510194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:43:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/336338065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:43:10 compute-0 nova_compute[251290]: 2026-02-02 11:43:10.801 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.092 251294 INFO nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Creating config drive at /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.096 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpohyj2o76 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.220 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpohyj2o76" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.252 251294 DEBUG nova.storage.rbd_utils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] rbd image 336c40ec-af53-4724-8c01-2cec821a49f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.258 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config 336c40ec-af53-4724-8c01-2cec821a49f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.458 251294 DEBUG oslo_concurrency.processutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config 336c40ec-af53-4724-8c01-2cec821a49f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.459 251294 INFO nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Deleting local config drive /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3/disk.config because it was imported into RBD.
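
Config drive assembly is the three-step sequence visible above: build an ISO9660 image from a temporary directory, rbd import it into the vms pool, then delete the local file, since the cdrom disk in the domain XML points at RBD, not at the local path. The same steps stripped down (flags mirror the logged commands; the -publisher string is omitted for brevity):

    # configdrive.py - build the ISO locally, push it into RBD, drop the local copy
    import os
    import subprocess

    def make_config_drive(src_dir: str, iso_path: str, pool: str, image: str,
                          user: str = "openstack",
                          conf: str = "/etc/ceph/ceph.conf") -> None:
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-quiet", "-J", "-r", "-V", "config-2", src_dir])
        subprocess.check_call([
            "rbd", "import", "--pool", pool, iso_path, image,
            "--image-format=2", "--id", user, "--conf", conf])
        os.unlink(iso_path)  # the RBD copy is now the one libvirt attaches
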
Feb 02 11:43:11 compute-0 kernel: tapcdbbf387-87: entered promiscuous mode
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.5105] manager: (tapcdbbf387-87): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Feb 02 11:43:11 compute-0 ovn_controller[154901]: 2026-02-02T11:43:11Z|00088|binding|INFO|Claiming lport cdbbf387-8780-4514-a372-9c5160d9e694 for this chassis.
Feb 02 11:43:11 compute-0 ovn_controller[154901]: 2026-02-02T11:43:11Z|00089|binding|INFO|cdbbf387-8780-4514-a372-9c5160d9e694: Claiming fa:16:3e:f8:9e:2c 10.100.0.14
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.512 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.515 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.518 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.533 251294 DEBUG nova.network.neutron [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated VIF entry in instance network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.534 251294 DEBUG nova.network.neutron [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.531 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:9e:2c 10.100.0.14'], port_security=['fa:16:3e:f8:9e:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '336c40ec-af53-4724-8c01-2cec821a49f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48206d91-8044-421b-87db-54dbeb1ce4a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f04aacc5-0ff4-4573-bbb3-5840590a43e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84348474-ef23-4e8d-9a74-21bd4ad8f865, chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=cdbbf387-8780-4514-a372-9c5160d9e694) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.533 165304 INFO neutron.agent.ovn.metadata.agent [-] Port cdbbf387-8780-4514-a372-9c5160d9e694 in datapath 48206d91-8044-421b-87db-54dbeb1ce4a4 bound to our chassis
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.534 165304 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48206d91-8044-421b-87db-54dbeb1ce4a4
Feb 02 11:43:11 compute-0 systemd-udevd[270310]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.544 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.543 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a5536a1a-1f94-4ef3-a201-76b8192635f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.545 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48206d91-81 in ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
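
Provisioning metadata for a datapath means a network namespace holding one end of a veth pair (tap48206d91-81 here) while the other end (tap48206d91-80) is plugged into br-int, so instances on the tenant network can reach the metadata server at 169.254.169.254. The namespace plumbing as plain iproute2 calls, for orientation only: the interface names are taken from the log, but the agent's actual sequence differs (its privsep daemon drives netlink via pyroute2 rather than the ip binary):

    # metadata_ns.py - the veth-into-namespace step, as iproute2 invocations
    import subprocess

    def provision_metadata_ns(netns: str, outer: str, inner: str) -> None:
        subprocess.check_call(["ip", "netns", "add", netns])
        # one veth end stays in the root namespace (it gets added to br-int),
        # its peer is created directly inside the metadata namespace
        subprocess.check_call(["ip", "link", "add", outer, "type", "veth",
                               "peer", "name", inner, "netns", netns])
        subprocess.check_call(["ip", "link", "set", outer, "up"])
        subprocess.check_call(["ip", "-n", netns, "link", "set", inner, "up"])

    # provision_metadata_ns("ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4",
    #                       "tap48206d91-80", "tap48206d91-81")
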
Feb 02 11:43:11 compute-0 systemd-machined[218018]: New machine qemu-5-instance-0000000b.
Feb 02 11:43:11 compute-0 ovn_controller[154901]: 2026-02-02T11:43:11Z|00090|binding|INFO|Setting lport cdbbf387-8780-4514-a372-9c5160d9e694 ovn-installed in OVS
Feb 02 11:43:11 compute-0 ovn_controller[154901]: 2026-02-02T11:43:11Z|00091|binding|INFO|Setting lport cdbbf387-8780-4514-a372-9c5160d9e694 up in Southbound
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.547 258380 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48206d91-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.547 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[10bf348a-61c8-4465-9527-f9b24da02dfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.548 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.549 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[b989f962-4bac-4b09-96f2-56d18085b6fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.5576] device (tapcdbbf387-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.5581] device (tapcdbbf387-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 11:43:11 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000b.
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.557 251294 DEBUG oslo_concurrency.lockutils [req-e2e883e3-00a1-4954-9c3a-3b41415ebbd3 req-ab791f89-ed97-49b2-90c4-3770a9d9ddb4 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:43:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.560 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[466d1b1c-bd1a-4562-b1d9-b367ea5b1576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.574 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bc1ee7-e7cd-4555-b412-3953d0107237]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.605 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1bb23e-868d-46bc-9bb3-fdfe534d13b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.6120] manager: (tap48206d91-80): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.611 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[ed20f438-0596-4c75-9929-d72e99c0ffa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.638 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b0945d-c175-4d6a-9692-b4185f78f42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.642 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[fc87185d-c70d-427e-b5bf-0a6149a70680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.6617] device (tap48206d91-80): carrier: link connected
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.665 258398 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bb2a9e-074b-4794-a6d3-7ab6189423ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.683 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[365ccc6a-624a-47e2-b95c-6d7c169e1913]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48206d91-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:ac:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424592, 'reachable_time': 39634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270344, 'error': None, 'target': 'ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.699 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f112fe60-0449-499f-99a1-2d83c6041f75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:ace2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424592, 'tstamp': 424592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270345, 'error': None, 'target': 'ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.718 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9298c6-fbfc-4744-bbc6-2d0a8ff03b0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48206d91-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:ac:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424592, 'reachable_time': 39634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270346, 'error': None, 'target': 'ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.753 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[8debb25a-58f2-4960-947e-e7be42b8a874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ceph-mon[74676]: pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.812 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d2e7cd-8038-44de-878b-42cb3c5bac3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.814 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48206d91-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.814 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.815 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48206d91-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:11 compute-0 NetworkManager[49067]: <info>  [1770032591.8183] manager: (tap48206d91-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 02 11:43:11 compute-0 kernel: tap48206d91-80: entered promiscuous mode
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.817 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.821 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48206d91-80, col_values=(('external_ids', {'iface-id': 'e53a65bd-fb67-43d4-8b51-266eb0a52069'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:43:11 compute-0 ovn_controller[154901]: 2026-02-02T11:43:11Z|00092|binding|INFO|Releasing lport e53a65bd-fb67-43d4-8b51-266eb0a52069 from this chassis (sb_readonly=0)
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.823 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.823 165304 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48206d91-8044-421b-87db-54dbeb1ce4a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48206d91-8044-421b-87db-54dbeb1ce4a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.824 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2db453ae-5634-44ca-8856-83ea89fa6f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.825 165304 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: global
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     log         /dev/log local0 debug
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     log-tag     haproxy-metadata-proxy-48206d91-8044-421b-87db-54dbeb1ce4a4
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     user        root
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     group       root
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     maxconn     1024
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     pidfile     /var/lib/neutron/external/pids/48206d91-8044-421b-87db-54dbeb1ce4a4.pid.haproxy
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     daemon
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: defaults
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     log global
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     mode http
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     option httplog
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     option dontlognull
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     option http-server-close
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     option forwardfor
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     retries                 3
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     timeout http-request    30s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     timeout connect         30s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     timeout client          32s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     timeout server          32s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     timeout http-keep-alive 30s
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: listen listener
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     bind 169.254.169.254:80
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:     http-request add-header X-OVN-Network-ID 48206d91-8044-421b-87db-54dbeb1ce4a4
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 11:43:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:11.826 165304 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4', 'env', 'PROCESS_TAG=haproxy-48206d91-8044-421b-87db-54dbeb1ce4a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48206d91-8044-421b-87db-54dbeb1ce4a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 11:43:11 compute-0 nova_compute[251290]: 2026-02-02 11:43:11.829 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.183 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032592.1823075, 336c40ec-af53-4724-8c01-2cec821a49f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.183 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] VM Started (Lifecycle Event)
Feb 02 11:43:12 compute-0 podman[270419]: 2026-02-02 11:43:12.207342816 +0000 UTC m=+0.051390764 container create 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.209 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.218 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032592.1824455, 336c40ec-af53-4724-8c01-2cec821a49f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.219 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] VM Paused (Lifecycle Event)
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.249 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.253 251294 DEBUG nova.compute.manager [req-8a775cc0-8c3d-4a34-9874-82f68863127d req-ecd7cc73-50eb-487a-b712-964d2bff36ae 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.254 251294 DEBUG oslo_concurrency.lockutils [req-8a775cc0-8c3d-4a34-9874-82f68863127d req-ecd7cc73-50eb-487a-b712-964d2bff36ae 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.254 251294 DEBUG oslo_concurrency.lockutils [req-8a775cc0-8c3d-4a34-9874-82f68863127d req-ecd7cc73-50eb-487a-b712-964d2bff36ae 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.254 251294 DEBUG oslo_concurrency.lockutils [req-8a775cc0-8c3d-4a34-9874-82f68863127d req-ecd7cc73-50eb-487a-b712-964d2bff36ae 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:12 compute-0 systemd[1]: Started libpod-conmon-606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a.scope.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.255 251294 DEBUG nova.compute.manager [req-8a775cc0-8c3d-4a34-9874-82f68863127d req-ecd7cc73-50eb-487a-b712-964d2bff36ae 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Processing event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.257 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.262 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.263 251294 DEBUG nova.virt.driver [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] Emitting event <LifecycleEvent: 1770032592.260588, 336c40ec-af53-4724-8c01-2cec821a49f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.264 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] VM Resumed (Lifecycle Event)
Feb 02 11:43:12 compute-0 podman[270419]: 2026-02-02 11:43:12.176310917 +0000 UTC m=+0.020358875 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.274 251294 INFO nova.virt.libvirt.driver [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Instance spawned successfully.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.274 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 11:43:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbb56cae36ee2135700c62e37075493dcd4fa9fb282f7ff5bdd844dfe01b900/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.301 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.311 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.312 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.312 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.313 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 podman[270419]: 2026-02-02 11:43:12.313710695 +0000 UTC m=+0.157758653 container init 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.313 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.314 251294 DEBUG nova.virt.libvirt.driver [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.318 251294 DEBUG nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 11:43:12 compute-0 podman[270419]: 2026-02-02 11:43:12.31947227 +0000 UTC m=+0.163520208 container start 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [NOTICE]   (270439) : New worker (270441) forked
Feb 02 11:43:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [NOTICE]   (270439) : Loading success.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.363 251294 INFO nova.compute.manager [None req-40601fa8-ff5d-41fd-99a9-18e3c7e10071 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.401 251294 INFO nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Took 11.31 seconds to spawn the instance on the hypervisor.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.401 251294 DEBUG nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.485 251294 INFO nova.compute.manager [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Took 12.63 seconds to build instance.
Feb 02 11:43:12 compute-0 nova_compute[251290]: 2026-02-02 11:43:12.511 251294 DEBUG oslo_concurrency.lockutils [None req-69e8b346-8e6b-4ec5-b9e2-6096bbda1fcd abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:13.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:13 compute-0 ceph-mon[74676]: pgmap v947: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.382 251294 DEBUG nova.compute.manager [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.383 251294 DEBUG oslo_concurrency.lockutils [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.383 251294 DEBUG oslo_concurrency.lockutils [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.383 251294 DEBUG oslo_concurrency.lockutils [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.384 251294 DEBUG nova.compute.manager [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:43:14 compute-0 nova_compute[251290]: 2026-02-02 11:43:14.384 251294 WARNING nova.compute.manager [req-94df550f-6be0-4e8f-a1b3-e02f4c40ddcd req-0ad57ed4-d385-49c0-9972-93a9fd6593c3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state active and task_state None.
Feb 02 11:43:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:43:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:15 compute-0 nova_compute[251290]: 2026-02-02 11:43:15.442 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:15 compute-0 nova_compute[251290]: 2026-02-02 11:43:15.803 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:15 compute-0 ceph-mon[74676]: pgmap v948: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:43:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:16 compute-0 nova_compute[251290]: 2026-02-02 11:43:16.166 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:16 compute-0 ovn_controller[154901]: 2026-02-02T11:43:16Z|00093|binding|INFO|Releasing lport e53a65bd-fb67-43d4-8b51-266eb0a52069 from this chassis (sb_readonly=0)
Feb 02 11:43:16 compute-0 NetworkManager[49067]: <info>  [1770032596.1713] manager: (patch-br-int-to-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Feb 02 11:43:16 compute-0 NetworkManager[49067]: <info>  [1770032596.1723] manager: (patch-provnet-46b4df41-2a04-49e4-81ae-750fefd59cc3-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb 02 11:43:16 compute-0 nova_compute[251290]: 2026-02-02 11:43:16.181 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:16 compute-0 ovn_controller[154901]: 2026-02-02T11:43:16Z|00094|binding|INFO|Releasing lport e53a65bd-fb67-43d4-8b51-266eb0a52069 from this chassis (sb_readonly=0)
Feb 02 11:43:16 compute-0 nova_compute[251290]: 2026-02-02 11:43:16.186 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:43:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:16] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:43:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:16] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:43:17 compute-0 nova_compute[251290]: 2026-02-02 11:43:17.011 251294 DEBUG nova.compute.manager [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:43:17 compute-0 nova_compute[251290]: 2026-02-02 11:43:17.011 251294 DEBUG nova.compute.manager [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing instance network info cache due to event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:43:17 compute-0 nova_compute[251290]: 2026-02-02 11:43:17.011 251294 DEBUG oslo_concurrency.lockutils [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:43:17 compute-0 nova_compute[251290]: 2026-02-02 11:43:17.011 251294 DEBUG oslo_concurrency.lockutils [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:43:17 compute-0 nova_compute[251290]: 2026-02-02 11:43:17.012 251294 DEBUG nova.network.neutron [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:43:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:17.172Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:43:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:17.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:17.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:18 compute-0 ceph-mon[74676]: pgmap v949: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:43:18 compute-0 nova_compute[251290]: 2026-02-02 11:43:18.261 251294 DEBUG nova.network.neutron [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated VIF entry in instance network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:43:18 compute-0 nova_compute[251290]: 2026-02-02 11:43:18.262 251294 DEBUG nova.network.neutron [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:43:18 compute-0 nova_compute[251290]: 2026-02-02 11:43:18.323 251294 DEBUG oslo_concurrency.lockutils [req-826c4370-3d78-42b5-8a50-255da8e4373e req-7f30646e-c94c-4d0e-b6ed-65a48d3b2492 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:43:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:18.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:20 compute-0 ceph-mon[74676]: pgmap v950: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:20 compute-0 nova_compute[251290]: 2026-02-02 11:43:20.444 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:20 compute-0 nova_compute[251290]: 2026-02-02 11:43:20.807 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:21 compute-0 podman[270460]: 2026-02-02 11:43:21.273611087 +0000 UTC m=+0.052739773 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 11:43:21 compute-0 podman[270461]: 2026-02-02 11:43:21.300123337 +0000 UTC m=+0.078122790 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:43:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:22 compute-0 ceph-mon[74676]: pgmap v951: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:22.681 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:22.682 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:43:22.683 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Feb 02 11:43:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:24 compute-0 ceph-mon[74676]: pgmap v952: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Feb 02 11:43:24 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3337405541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:24 compute-0 ovn_controller[154901]: 2026-02-02T11:43:24Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:9e:2c 10.100.0.14
Feb 02 11:43:24 compute-0 ovn_controller[154901]: 2026-02-02T11:43:24Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:9e:2c 10.100.0.14
Feb 02 11:43:25 compute-0 nova_compute[251290]: 2026-02-02 11:43:25.447 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:25.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:25 compute-0 nova_compute[251290]: 2026-02-02 11:43:25.811 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:26 compute-0 ceph-mon[74676]: pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 02 11:43:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Feb 02 11:43:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:26] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:43:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:26] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:43:27 compute-0 sudo[270507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:43:27 compute-0 sudo[270507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:27 compute-0 sudo[270507]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:27.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:27.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:28 compute-0 ceph-mon[74676]: pgmap v954: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Feb 02 11:43:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 02 11:43:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:28.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:43:29
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.nfs', 'backups', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:43:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:29.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:43:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:29.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001102599282900306 of space, bias 1.0, pg target 0.3307797848700918 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:43:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:43:30 compute-0 ceph-mon[74676]: pgmap v955: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 02 11:43:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:30 compute-0 nova_compute[251290]: 2026-02-02 11:43:30.449 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:30 compute-0 nova_compute[251290]: 2026-02-02 11:43:30.815 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/758386311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:43:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1711092124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:43:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:31.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:32 compute-0 ceph-mon[74676]: pgmap v956: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:33.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:34 compute-0 ceph-mon[74676]: pgmap v957: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3554402046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:35 compute-0 nova_compute[251290]: 2026-02-02 11:43:35.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:35 compute-0 nova_compute[251290]: 2026-02-02 11:43:35.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:35 compute-0 nova_compute[251290]: 2026-02-02 11:43:35.450 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:35.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2942424590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2950039309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:35 compute-0 nova_compute[251290]: 2026-02-02 11:43:35.817 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.045 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.046 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.046 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.046 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.046 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:43:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062531032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.552 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.633 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.634 251294 DEBUG nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 11:43:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Feb 02 11:43:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.812 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.814 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4337MB free_disk=59.92201232910156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.814 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.815 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.899 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Instance 336c40ec-af53-4724-8c01-2cec821a49f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.899 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.900 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:43:36 compute-0 nova_compute[251290]: 2026-02-02 11:43:36.948 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:43:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:43:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:43:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:37.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:43:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:37.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:43:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:37.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:43:37 compute-0 ceph-mon[74676]: pgmap v958: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Feb 02 11:43:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4240097611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4062531032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:43:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74210135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:37 compute-0 nova_compute[251290]: 2026-02-02 11:43:37.448 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:43:37 compute-0 nova_compute[251290]: 2026-02-02 11:43:37.454 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:43:37 compute-0 nova_compute[251290]: 2026-02-02 11:43:37.472 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:43:37 compute-0 nova_compute[251290]: 2026-02-02 11:43:37.500 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:43:37 compute-0 nova_compute[251290]: 2026-02-02 11:43:37.501 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:43:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:38 compute-0 ceph-mon[74676]: pgmap v959: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Feb 02 11:43:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/74210135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:43:38 compute-0 nova_compute[251290]: 2026-02-02 11:43:38.502 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:38 compute-0 nova_compute[251290]: 2026-02-02 11:43:38.502 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 869 KiB/s rd, 27 KiB/s wr, 40 op/s
Feb 02 11:43:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:38.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:39 compute-0 nova_compute[251290]: 2026-02-02 11:43:39.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:39 compute-0 nova_compute[251290]: 2026-02-02 11:43:39.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:39 compute-0 nova_compute[251290]: 2026-02-02 11:43:39.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:43:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:40 compute-0 nova_compute[251290]: 2026-02-02 11:43:40.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:40 compute-0 ceph-mon[74676]: pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 869 KiB/s rd, 27 KiB/s wr, 40 op/s
Feb 02 11:43:40 compute-0 nova_compute[251290]: 2026-02-02 11:43:40.453 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 27 KiB/s wr, 67 op/s
Feb 02 11:43:40 compute-0 nova_compute[251290]: 2026-02-02 11:43:40.820 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:41.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:41.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:42 compute-0 nova_compute[251290]: 2026-02-02 11:43:42.012 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:42 compute-0 ceph-mon[74676]: pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 27 KiB/s wr, 67 op/s
Feb 02 11:43:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.187 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.188 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.188 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 11:43:43 compute-0 nova_compute[251290]: 2026-02-02 11:43:43.189 251294 DEBUG nova.objects.instance [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 336c40ec-af53-4724-8c01-2cec821a49f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:43:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:43.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:43.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:44 compute-0 ceph-mon[74676]: pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Feb 02 11:43:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3085650453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:43:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3085650453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:43:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:43:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Feb 02 11:43:45 compute-0 nova_compute[251290]: 2026-02-02 11:43:45.242 251294 DEBUG nova.network.neutron [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:43:45 compute-0 nova_compute[251290]: 2026-02-02 11:43:45.259 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:43:45 compute-0 nova_compute[251290]: 2026-02-02 11:43:45.260 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 11:43:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:45 compute-0 nova_compute[251290]: 2026-02-02 11:43:45.455 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:45.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:45 compute-0 nova_compute[251290]: 2026-02-02 11:43:45.822 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:46 compute-0 ceph-mon[74676]: pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Feb 02 11:43:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Feb 02 11:43:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:46] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb 02 11:43:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:46] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb 02 11:43:47 compute-0 sudo[270598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:43:47 compute-0 sudo[270598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:47 compute-0 sudo[270598]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:47.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:43:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:47.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:47.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:47 compute-0 sudo[270624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:43:47 compute-0 sudo[270624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:47 compute-0 sudo[270624]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:47.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:47 compute-0 sudo[270649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Feb 02 11:43:47 compute-0 sudo[270649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 sudo[270649]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 sudo[270696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:43:48 compute-0 sudo[270696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:48 compute-0 sudo[270696]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:48 compute-0 sudo[270721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:43:48 compute-0 sudo[270721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: pgmap v964: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 sudo[270721]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:43:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 78 op/s
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:43:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:43:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:43:48 compute-0 sudo[270779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:43:48 compute-0 sudo[270779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:48 compute-0 sudo[270779]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:48.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:48 compute-0 sudo[270804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:43:48 compute-0 sudo[270804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.262090615 +0000 UTC m=+0.038581317 container create f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:43:49 compute-0 systemd[1]: Started libpod-conmon-f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe.scope.
Feb 02 11:43:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.245266523 +0000 UTC m=+0.021757245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.341523942 +0000 UTC m=+0.118014674 container init f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.350075727 +0000 UTC m=+0.126566439 container start f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.353975339 +0000 UTC m=+0.130466071 container attach f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:43:49 compute-0 kind_poincare[270888]: 167 167
Feb 02 11:43:49 compute-0 systemd[1]: libpod-f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe.scope: Deactivated successfully.
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.35994197 +0000 UTC m=+0.136432672 container died f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3df8b2370f2c2be6731aed6a1759978330c9d8d85575e946bb0a73dd23f79718-merged.mount: Deactivated successfully.
Feb 02 11:43:49 compute-0 podman[270872]: 2026-02-02 11:43:49.405346252 +0000 UTC m=+0.181836954 container remove f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:43:49 compute-0 systemd[1]: libpod-conmon-f34050a3be202fefdacee769705c0fc037111df416dc1a62c7b5a66516da0cbe.scope: Deactivated successfully.
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:43:49 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.543561404 +0000 UTC m=+0.042828099 container create 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:43:49 compute-0 systemd[1]: Started libpod-conmon-33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68.scope.
Feb 02 11:43:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:49.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.525780454 +0000 UTC m=+0.025047169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.634260863 +0000 UTC m=+0.133527578 container init 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.641953804 +0000 UTC m=+0.141220499 container start 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.646150024 +0000 UTC m=+0.145416809 container attach 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:43:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:49.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:49 compute-0 nice_payne[270928]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:43:49 compute-0 nice_payne[270928]: --> All data devices are unavailable
Feb 02 11:43:49 compute-0 systemd[1]: libpod-33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68.scope: Deactivated successfully.
Feb 02 11:43:49 compute-0 podman[270912]: 2026-02-02 11:43:49.979264923 +0000 UTC m=+0.478531638 container died 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe29bdda7ade4f1a45b219d6dd47ab925fa8b759f3e3a62d297d48e9c6f39eb0-merged.mount: Deactivated successfully.
Feb 02 11:43:50 compute-0 podman[270912]: 2026-02-02 11:43:50.022392009 +0000 UTC m=+0.521658704 container remove 33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:43:50 compute-0 systemd[1]: libpod-conmon-33c7b304f6bd867246df8f74d8426b03add9d9727e35516c9c467126d36e3d68.scope: Deactivated successfully.
Feb 02 11:43:50 compute-0 sudo[270804]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:50 compute-0 sudo[270958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:43:50 compute-0 sudo[270958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:50 compute-0 sudo[270958]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:50 compute-0 sudo[270983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:43:50 compute-0 sudo[270983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:50 compute-0 nova_compute[251290]: 2026-02-02 11:43:50.456 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:50 compute-0 ceph-mon[74676]: pgmap v965: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Feb 02 11:43:50 compute-0 ceph-mon[74676]: pgmap v966: 353 pgs: 353 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 78 op/s
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.58966737 +0000 UTC m=+0.040537753 container create 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:43:50 compute-0 systemd[1]: Started libpod-conmon-239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87.scope.
Feb 02 11:43:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.660781268 +0000 UTC m=+0.111651661 container init 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.573182007 +0000 UTC m=+0.024052410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.667306906 +0000 UTC m=+0.118177289 container start 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:43:50 compute-0 nifty_haslett[271065]: 167 167
Feb 02 11:43:50 compute-0 systemd[1]: libpod-239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87.scope: Deactivated successfully.
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.673497193 +0000 UTC m=+0.124367576 container attach 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.675206732 +0000 UTC m=+0.126077115 container died 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1092bf4a8fe6a4f0f73c71298685ce253374218d51a96a4165a12767249b2d29-merged.mount: Deactivated successfully.
Feb 02 11:43:50 compute-0 podman[271048]: 2026-02-02 11:43:50.710297578 +0000 UTC m=+0.161167961 container remove 239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:43:50 compute-0 systemd[1]: libpod-conmon-239b0ea35833513fec9427aaed63fd42f31f852601fe9f381e19e930df194f87.scope: Deactivated successfully.
Feb 02 11:43:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 193 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Feb 02 11:43:50 compute-0 nova_compute[251290]: 2026-02-02 11:43:50.826 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:50 compute-0 podman[271088]: 2026-02-02 11:43:50.850450905 +0000 UTC m=+0.040065669 container create aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:43:50 compute-0 systemd[1]: Started libpod-conmon-aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05.scope.
Feb 02 11:43:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64707849cc004a936187691759e771c250328af9c1a53996419485109057d297/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64707849cc004a936187691759e771c250328af9c1a53996419485109057d297/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64707849cc004a936187691759e771c250328af9c1a53996419485109057d297/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64707849cc004a936187691759e771c250328af9c1a53996419485109057d297/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:50 compute-0 podman[271088]: 2026-02-02 11:43:50.834990582 +0000 UTC m=+0.024605366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:50 compute-0 podman[271088]: 2026-02-02 11:43:50.944602424 +0000 UTC m=+0.134217208 container init aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:43:50 compute-0 podman[271088]: 2026-02-02 11:43:50.950093282 +0000 UTC m=+0.139708046 container start aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:43:50 compute-0 podman[271088]: 2026-02-02 11:43:50.953765227 +0000 UTC m=+0.143380011 container attach aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:43:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:51 compute-0 naughty_knuth[271105]: {
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:     "1": [
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:         {
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "devices": [
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "/dev/loop3"
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             ],
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "lv_name": "ceph_lv0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "lv_size": "21470642176",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "name": "ceph_lv0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "tags": {
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.cluster_name": "ceph",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.crush_device_class": "",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.encrypted": "0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.osd_id": "1",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.type": "block",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.vdo": "0",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:                 "ceph.with_tpm": "0"
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             },
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "type": "block",
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:             "vg_name": "ceph_vg0"
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:         }
Feb 02 11:43:51 compute-0 naughty_knuth[271105]:     ]
Feb 02 11:43:51 compute-0 naughty_knuth[271105]: }
Feb 02 11:43:51 compute-0 systemd[1]: libpod-aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05.scope: Deactivated successfully.
Feb 02 11:43:51 compute-0 podman[271088]: 2026-02-02 11:43:51.262026753 +0000 UTC m=+0.451641537 container died aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-64707849cc004a936187691759e771c250328af9c1a53996419485109057d297-merged.mount: Deactivated successfully.
Feb 02 11:43:51 compute-0 podman[271088]: 2026-02-02 11:43:51.299313922 +0000 UTC m=+0.488928686 container remove aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_knuth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:43:51 compute-0 systemd[1]: libpod-conmon-aad8b573254c3cacef47c54990d82498e5919b1421e8ae7eaa68296f0978ec05.scope: Deactivated successfully.
Feb 02 11:43:51 compute-0 sudo[270983]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:51 compute-0 podman[271126]: 2026-02-02 11:43:51.376813424 +0000 UTC m=+0.058857319 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 11:43:51 compute-0 podman[271127]: 2026-02-02 11:43:51.402345805 +0000 UTC m=+0.083752381 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 11:43:51 compute-0 sudo[271159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:43:51 compute-0 sudo[271159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:51 compute-0 sudo[271159]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:51 compute-0 sudo[271194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:43:51 compute-0 sudo[271194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:51.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.821537711 +0000 UTC m=+0.033576123 container create bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:43:51 compute-0 systemd[1]: Started libpod-conmon-bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4.scope.
Feb 02 11:43:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.898732664 +0000 UTC m=+0.110771106 container init bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.808176508 +0000 UTC m=+0.020214950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.907705601 +0000 UTC m=+0.119744013 container start bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.913076985 +0000 UTC m=+0.125115417 container attach bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:43:51 compute-0 jovial_roentgen[271282]: 167 167
Feb 02 11:43:51 compute-0 systemd[1]: libpod-bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4.scope: Deactivated successfully.
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.915716971 +0000 UTC m=+0.127755383 container died bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-79480c75e06ea5d6690186e4c124b04ca2a3bb70563124b788aa5219dbb32a44-merged.mount: Deactivated successfully.
Feb 02 11:43:51 compute-0 podman[271265]: 2026-02-02 11:43:51.955329547 +0000 UTC m=+0.167367959 container remove bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_roentgen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:43:51 compute-0 systemd[1]: libpod-conmon-bfbb68504cc258154d871a10b97ab482a7271b9e7a1333e41b9e22b9014633e4.scope: Deactivated successfully.
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.108249919 +0000 UTC m=+0.046655698 container create 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:52 compute-0 systemd[1]: Started libpod-conmon-107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351.scope.
Feb 02 11:43:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d47f4499206e3308fe00123d288fe671dd033433c4f38a8951e1b583042c76c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d47f4499206e3308fe00123d288fe671dd033433c4f38a8951e1b583042c76c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d47f4499206e3308fe00123d288fe671dd033433c4f38a8951e1b583042c76c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d47f4499206e3308fe00123d288fe671dd033433c4f38a8951e1b583042c76c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.087588297 +0000 UTC m=+0.025993896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.188426517 +0000 UTC m=+0.126832106 container init 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.194052749 +0000 UTC m=+0.132458328 container start 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.198107595 +0000 UTC m=+0.136513194 container attach 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:43:52 compute-0 ceph-mon[74676]: pgmap v967: 353 pgs: 353 active+clean; 193 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Feb 02 11:43:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Feb 02 11:43:52 compute-0 lvm[271395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:43:52 compute-0 lvm[271395]: VG ceph_vg0 finished
Feb 02 11:43:52 compute-0 sharp_franklin[271321]: {}
Feb 02 11:43:52 compute-0 systemd[1]: libpod-107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351.scope: Deactivated successfully.
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.876502971 +0000 UTC m=+0.814908550 container died 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d47f4499206e3308fe00123d288fe671dd033433c4f38a8951e1b583042c76c-merged.mount: Deactivated successfully.
Feb 02 11:43:52 compute-0 podman[271305]: 2026-02-02 11:43:52.923375125 +0000 UTC m=+0.861780704 container remove 107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:43:52 compute-0 systemd[1]: libpod-conmon-107697d0e2253aef0c64c7c3eff4eeb4ff9c1a1d7e8c82a66aa5b08dd6ea0351.scope: Deactivated successfully.
Feb 02 11:43:52 compute-0 sudo[271194]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:43:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:43:52 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:53 compute-0 sudo[271410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:43:53 compute-0 sudo[271410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:43:53 compute-0 sudo[271410]: pam_unix(sudo:session): session closed for user root
Feb 02 11:43:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:53.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:43:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:43:53 compute-0 ceph-mon[74676]: pgmap v968: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Feb 02 11:43:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:53 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:43:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Feb 02 11:43:55 compute-0 nova_compute[251290]: 2026-02-02 11:43:55.460 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:55.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:43:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:55.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:43:55 compute-0 nova_compute[251290]: 2026-02-02 11:43:55.827 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:43:56 compute-0 ceph-mon[74676]: pgmap v969: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Feb 02 11:43:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:43:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:43:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:43:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:43:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:43:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 131 KiB/s wr, 44 op/s
Feb 02 11:43:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.799033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636799080, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1433, "num_deletes": 255, "total_data_size": 2727658, "memory_usage": 2757488, "flush_reason": "Manual Compaction"}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636819995, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2640900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26846, "largest_seqno": 28278, "table_properties": {"data_size": 2634170, "index_size": 3801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14028, "raw_average_key_size": 19, "raw_value_size": 2620636, "raw_average_value_size": 3644, "num_data_blocks": 167, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032508, "oldest_key_time": 1770032508, "file_creation_time": 1770032636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21040 microseconds, and 4735 cpu microseconds.
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.820066) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2640900 bytes OK
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.820095) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.821925) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.821945) EVENT_LOG_v1 {"time_micros": 1770032636821940, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.821972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2721344, prev total WAL file size 2721344, number of live WAL files 2.
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.822993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2579KB)], [59(13MB)]
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636823043, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17054509, "oldest_snapshot_seqno": -1}
Feb 02 11:43:56 compute-0 nova_compute[251290]: 2026-02-02 11:43:56.927 251294 INFO nova.compute.manager [None req-6f74e24d-28a0-4e61-b29f-c1f48042b2de abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Get console output
Feb 02 11:43:56 compute-0 nova_compute[251290]: 2026-02-02 11:43:56.934 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6050 keys, 16905390 bytes, temperature: kUnknown
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636940613, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16905390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16861993, "index_size": 27159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 154162, "raw_average_key_size": 25, "raw_value_size": 16749988, "raw_average_value_size": 2768, "num_data_blocks": 1111, "num_entries": 6050, "num_filter_entries": 6050, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.941148) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16905390 bytes
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.942894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.7 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 6578, records dropped: 528 output_compression: NoCompression
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.942920) EVENT_LOG_v1 {"time_micros": 1770032636942908, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117867, "compaction_time_cpu_micros": 30756, "output_level": 6, "num_output_files": 1, "total_output_size": 16905390, "num_input_records": 6578, "num_output_records": 6050, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636943512, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032636945659, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.822846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.945910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.945918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.945921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.945923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:43:56.945925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:43:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:56] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb 02 11:43:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:43:56] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb 02 11:43:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:57.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:57.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:43:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:58 compute-0 ceph-mon[74676]: pgmap v970: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 131 KiB/s wr, 44 op/s
Feb 02 11:43:58 compute-0 nova_compute[251290]: 2026-02-02 11:43:58.133 251294 DEBUG nova.compute.manager [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:43:58 compute-0 nova_compute[251290]: 2026-02-02 11:43:58.134 251294 DEBUG nova.compute.manager [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing instance network info cache due to event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:43:58 compute-0 nova_compute[251290]: 2026-02-02 11:43:58.134 251294 DEBUG oslo_concurrency.lockutils [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:43:58 compute-0 nova_compute[251290]: 2026-02-02 11:43:58.134 251294 DEBUG oslo_concurrency.lockutils [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:43:58 compute-0 nova_compute[251290]: 2026-02-02 11:43:58.134 251294 DEBUG nova.network.neutron [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:43:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 131 KiB/s wr, 44 op/s
Feb 02 11:43:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:43:58.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:43:59 compute-0 nova_compute[251290]: 2026-02-02 11:43:59.111 251294 INFO nova.compute.manager [None req-4b69d584-449e-46a8-97aa-80be341a2e7a abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Get console output
Feb 02 11:43:59 compute-0 nova_compute[251290]: 2026-02-02 11:43:59.118 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:43:59 compute-0 nova_compute[251290]: 2026-02-02 11:43:59.378 251294 DEBUG nova.network.neutron [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated VIF entry in instance network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:43:59 compute-0 nova_compute[251290]: 2026-02-02 11:43:59.379 251294 DEBUG nova.network.neutron [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:43:59 compute-0 nova_compute[251290]: 2026-02-02 11:43:59.397 251294 DEBUG oslo_concurrency.lockutils [req-6911e54f-94cf-4a50-a34e-de3a53921d6a req-e011dbde-1bd1-469c-ba04-22fee048674a 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:43:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:43:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:43:59.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:43:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:43:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:43:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:43:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:43:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:43:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:43:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:00 compute-0 ceph-mon[74676]: pgmap v971: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 131 KiB/s wr, 44 op/s
Feb 02 11:44:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.302 251294 DEBUG nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.302 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.302 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.303 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.303 251294 DEBUG nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.303 251294 WARNING nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state active and task_state None.
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.303 251294 DEBUG nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.303 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.304 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.304 251294 DEBUG oslo_concurrency.lockutils [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.304 251294 DEBUG nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.304 251294 WARNING nova.compute.manager [req-31d52682-6d72-471b-9865-7ebe1c6c0b5b req-22431180-1646-4385-80b5-dc0e3cdf1ab3 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state active and task_state None.
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.463 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 110 KiB/s wr, 37 op/s
Feb 02 11:44:00 compute-0 nova_compute[251290]: 2026-02-02 11:44:00.830 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:01.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:01.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:01 compute-0 nova_compute[251290]: 2026-02-02 11:44:01.976 251294 INFO nova.compute.manager [None req-e75b713b-4500-48b9-b15b-0788d490d4d0 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Get console output
Feb 02 11:44:01 compute-0 nova_compute[251290]: 2026-02-02 11:44:01.982 258588 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 11:44:02 compute-0 ceph-mon[74676]: pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 110 KiB/s wr, 37 op/s
Feb 02 11:44:02 compute-0 nova_compute[251290]: 2026-02-02 11:44:02.751 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:02 compute-0 nova_compute[251290]: 2026-02-02 11:44:02.751 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing instance network info cache due to event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:44:02 compute-0 nova_compute[251290]: 2026-02-02 11:44:02.751 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:44:02 compute-0 nova_compute[251290]: 2026-02-02 11:44:02.752 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:44:02 compute-0 nova_compute[251290]: 2026-02-02 11:44:02.752 251294 DEBUG nova.network.neutron [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:44:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 88 KiB/s wr, 16 op/s
Feb 02 11:44:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:03.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:04 compute-0 ceph-mon[74676]: pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 88 KiB/s wr, 16 op/s
Feb 02 11:44:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 15 KiB/s wr, 2 op/s
Feb 02 11:44:05 compute-0 nova_compute[251290]: 2026-02-02 11:44:05.465 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:05.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:05.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:05 compute-0 nova_compute[251290]: 2026-02-02 11:44:05.831 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:06 compute-0 ceph-mon[74676]: pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 15 KiB/s wr, 2 op/s
Feb 02 11:44:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 31 op/s
Feb 02 11:44:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:06] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:44:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:06] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.161 251294 DEBUG nova.network.neutron [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated VIF entry in instance network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.162 251294 DEBUG nova.network.neutron [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.168 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:07 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:07.168 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:44:07 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:07.169 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:44:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:07.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.215 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.215 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.215 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.216 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.216 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.216 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.216 251294 WARNING nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state active and task_state None.
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.216 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.217 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.217 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.217 251294 DEBUG oslo_concurrency.lockutils [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.217 251294 DEBUG nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:07 compute-0 nova_compute[251290]: 2026-02-02 11:44:07.217 251294 WARNING nova.compute.manager [req-0994c789-ab01-4dd6-b477-49e0b4b956bf req-17039f9c-1eec-4260-9e9a-98501d78a2bc 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state active and task_state None.
Feb 02 11:44:07 compute-0 sudo[271449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:44:07 compute-0 sudo[271449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:07 compute-0 sudo[271449]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:07.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:08 compute-0 ceph-mon[74676]: pgmap v975: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 31 op/s
Feb 02 11:44:08 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2108690833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.6 KiB/s wr, 29 op/s
Feb 02 11:44:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:09.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:10 compute-0 ceph-mon[74676]: pgmap v976: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.6 KiB/s wr, 29 op/s
Feb 02 11:44:10 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:10.171 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:44:10 compute-0 nova_compute[251290]: 2026-02-02 11:44:10.467 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:10 compute-0 nova_compute[251290]: 2026-02-02 11:44:10.833 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:11.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:12 compute-0 ceph-mon[74676]: pgmap v977: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.466 251294 DEBUG nova.compute.manager [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.466 251294 DEBUG nova.compute.manager [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing instance network info cache due to event network-changed-cdbbf387-8780-4514-a372-9c5160d9e694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.467 251294 DEBUG oslo_concurrency.lockutils [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.467 251294 DEBUG oslo_concurrency.lockutils [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquired lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.467 251294 DEBUG nova.network.neutron [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Refreshing network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.546 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.547 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.547 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.547 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.548 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.549 251294 INFO nova.compute.manager [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Terminating instance
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.550 251294 DEBUG nova.compute.manager [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 11:44:12 compute-0 kernel: tapcdbbf387-87 (unregistering): left promiscuous mode
Feb 02 11:44:12 compute-0 NetworkManager[49067]: <info>  [1770032652.6293] device (tapcdbbf387-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 11:44:12 compute-0 ovn_controller[154901]: 2026-02-02T11:44:12Z|00095|binding|INFO|Releasing lport cdbbf387-8780-4514-a372-9c5160d9e694 from this chassis (sb_readonly=0)
Feb 02 11:44:12 compute-0 ovn_controller[154901]: 2026-02-02T11:44:12Z|00096|binding|INFO|Setting lport cdbbf387-8780-4514-a372-9c5160d9e694 down in Southbound
Feb 02 11:44:12 compute-0 ovn_controller[154901]: 2026-02-02T11:44:12Z|00097|binding|INFO|Removing iface tapcdbbf387-87 ovn-installed in OVS
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.641 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.650 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:9e:2c 10.100.0.14'], port_security=['fa:16:3e:f8:9e:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '336c40ec-af53-4724-8c01-2cec821a49f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48206d91-8044-421b-87db-54dbeb1ce4a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3240aa599bd249a3b72e42fcc47af557', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f04aacc5-0ff4-4573-bbb3-5840590a43e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84348474-ef23-4e8d-9a74-21bd4ad8f865, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>], logical_port=cdbbf387-8780-4514-a372-9c5160d9e694) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6ed75498e0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.652 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.652 165304 INFO neutron.agent.ovn.metadata.agent [-] Port cdbbf387-8780-4514-a372-9c5160d9e694 in datapath 48206d91-8044-421b-87db-54dbeb1ce4a4 unbound from our chassis
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.654 165304 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48206d91-8044-421b-87db-54dbeb1ce4a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.656 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[de174c08-8b24-43c0-a43d-516e7ffad499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.656 165304 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4 namespace which is not needed anymore
Feb 02 11:44:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 02 11:44:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Consumed 14.148s CPU time.
Feb 02 11:44:12 compute-0 systemd-machined[218018]: Machine qemu-5-instance-0000000b terminated.
Feb 02 11:44:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.794 251294 INFO nova.virt.libvirt.driver [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Instance destroyed successfully.
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.795 251294 DEBUG nova.objects.instance [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lazy-loading 'resources' on Instance uuid 336c40ec-af53-4724-8c01-2cec821a49f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [NOTICE]   (270439) : haproxy version is 2.8.14-c23fe91
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [NOTICE]   (270439) : path to executable is /usr/sbin/haproxy
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [WARNING]  (270439) : Exiting Master process...
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [WARNING]  (270439) : Exiting Master process...
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.814 251294 DEBUG nova.virt.libvirt.vif [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-640321398',display_name='tempest-TestNetworkBasicOps-server-640321398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-640321398',id=11,image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKrY/FdowAFVklD/wXNvtBo9XHL36YOm0NhB6ClNMLwm16hZQnK9nS2yj5JzsZjBPQ4VmPqcFH9YEh8Lga2thc5KERzORTkCZDm1xZtV4CAwrPf0SkJpTQ6TioHUEA/3g==',key_name='tempest-TestNetworkBasicOps-2123245503',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:43:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3240aa599bd249a3b72e42fcc47af557',ramdisk_id='',reservation_id='r-lvum5ybr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8a4b36bd-584f-4a0a-aab3-55c0b12d2d97',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-571256976',owner_user_name='tempest-TestNetworkBasicOps-571256976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:43:12Z,user_data=None,user_id='abee87546a344ef285e2e269d2c74792',uuid=336c40ec-af53-4724-8c01-2cec821a49f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [ALERT]    (270439) : Current worker (270441) exited with code 143 (Terminated)
Feb 02 11:44:12 compute-0 neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4[270435]: [WARNING]  (270439) : All workers exited. Exiting... (0)
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.814 251294 DEBUG nova.network.os_vif_util [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converting VIF {"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.815 251294 DEBUG nova.network.os_vif_util [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.815 251294 DEBUG os_vif [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 11:44:12 compute-0 systemd[1]: libpod-606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a.scope: Deactivated successfully.
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.817 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.817 251294 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdbbf387-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.820 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.823 251294 INFO os_vif [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:9e:2c,bridge_name='br-int',has_traffic_filtering=True,id=cdbbf387-8780-4514-a372-9c5160d9e694,network=Network(48206d91-8044-421b-87db-54dbeb1ce4a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdbbf387-87')
Feb 02 11:44:12 compute-0 podman[271504]: 2026-02-02 11:44:12.824661573 +0000 UTC m=+0.064288374 container died 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a-userdata-shm.mount: Deactivated successfully.
Feb 02 11:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fbb56cae36ee2135700c62e37075493dcd4fa9fb282f7ff5bdd844dfe01b900-merged.mount: Deactivated successfully.
Feb 02 11:44:12 compute-0 podman[271504]: 2026-02-02 11:44:12.867212013 +0000 UTC m=+0.106838814 container cleanup 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:44:12 compute-0 systemd[1]: libpod-conmon-606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a.scope: Deactivated successfully.
Feb 02 11:44:12 compute-0 podman[271561]: 2026-02-02 11:44:12.933899155 +0000 UTC m=+0.045857916 container remove 606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.938 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[84df0ead-29f7-4600-9259-b0de55d8df7f]: (4, ('Mon Feb  2 11:44:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4 (606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a)\n606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a\nMon Feb  2 11:44:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4 (606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a)\n606d09ae1798845ba61110c9e739b860eca5a354052125e0f9f50d99489eef8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.940 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[9404e938-8435-4046-a4c3-bd662b8b54bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.941 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48206d91-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:44:12 compute-0 kernel: tap48206d91-80: left promiscuous mode
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.944 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 nova_compute[251290]: 2026-02-02 11:44:12.949 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.952 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[6077dd7e-f609-45f7-8f4b-be13adf158f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.967 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[f79433ed-57af-4dbb-9952-ba43007c0b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.968 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[174667c0-4130-4f68-bb2e-73ec78d80e95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.982 258380 DEBUG oslo.privsep.daemon [-] privsep: reply[2bed0bf9-3023-4ba6-9b88-994c040f188c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424586, 'reachable_time': 28661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271576, 'error': None, 'target': 'ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.984 165875 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48206d91-8044-421b-87db-54dbeb1ce4a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 11:44:12 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:12.985 165875 DEBUG oslo.privsep.daemon [-] privsep: reply[f4791165-8ede-49bc-b4b4-02a19cd756ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 11:44:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d48206d91\x2d8044\x2d421b\x2d87db\x2d54dbeb1ce4a4.mount: Deactivated successfully.
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.026 251294 DEBUG nova.compute.manager [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.027 251294 DEBUG oslo_concurrency.lockutils [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.027 251294 DEBUG oslo_concurrency.lockutils [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.027 251294 DEBUG oslo_concurrency.lockutils [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.028 251294 DEBUG nova.compute.manager [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.028 251294 DEBUG nova.compute.manager [req-244e6b84-a504-487d-acea-818c7450129a req-dd3d2c7e-99a8-4adb-8bf2-1938c99c456b 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-unplugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.327 251294 INFO nova.virt.libvirt.driver [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Deleting instance files /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3_del
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.329 251294 INFO nova.virt.libvirt.driver [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Deletion of /var/lib/nova/instances/336c40ec-af53-4724-8c01-2cec821a49f3_del complete
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.404 251294 INFO nova.compute.manager [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Took 0.85 seconds to destroy the instance on the hypervisor.
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.405 251294 DEBUG oslo.service.loopingcall [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.405 251294 DEBUG nova.compute.manager [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 11:44:13 compute-0 nova_compute[251290]: 2026-02-02 11:44:13.405 251294 DEBUG nova.network.neutron [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 11:44:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.003000087s ======
Feb 02 11:44:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Feb 02 11:44:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:13.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:14 compute-0 ceph-mon[74676]: pgmap v978: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.592 251294 DEBUG nova.network.neutron [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updated VIF entry in instance network info cache for port cdbbf387-8780-4514-a372-9c5160d9e694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.593 251294 DEBUG nova.network.neutron [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [{"id": "cdbbf387-8780-4514-a372-9c5160d9e694", "address": "fa:16:3e:f8:9e:2c", "network": {"id": "48206d91-8044-421b-87db-54dbeb1ce4a4", "bridge": "br-int", "label": "tempest-network-smoke--1708931351", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3240aa599bd249a3b72e42fcc47af557", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdbbf387-87", "ovs_interfaceid": "cdbbf387-8780-4514-a372-9c5160d9e694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.599 251294 DEBUG nova.network.neutron [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 11:44:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:44:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.633 251294 DEBUG oslo_concurrency.lockutils [req-dc8d2d04-3d61-4531-9f21-72fca83698be req-c9c9cb8d-1776-4afc-8306-554501947058 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Releasing lock "refresh_cache-336c40ec-af53-4724-8c01-2cec821a49f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.645 251294 INFO nova.compute.manager [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Took 1.24 seconds to deallocate network for instance.
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.727 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.727 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.735 251294 DEBUG nova.compute.manager [req-19d38a49-9ce7-433b-9731-5b12de1eefa0 req-4960af9d-9832-451a-8e79-a5410e28166e 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-deleted-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:14 compute-0 nova_compute[251290]: 2026-02-02 11:44:14.808 251294 DEBUG oslo_concurrency.processutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.126 251294 DEBUG nova.compute.manager [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.127 251294 DEBUG oslo_concurrency.lockutils [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Acquiring lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.127 251294 DEBUG oslo_concurrency.lockutils [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.127 251294 DEBUG oslo_concurrency.lockutils [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.128 251294 DEBUG nova.compute.manager [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] No waiting events found dispatching network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.128 251294 WARNING nova.compute.manager [req-995eb13d-e7f8-49e2-ae27-73d2bc56d9d4 req-975a398f-3dcf-4dac-95fc-72d779a85da2 98302b895b6f4130b93959bc3eaf5a88 9d750e6162e4470ba285d5c66a090fad - - default default] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Received unexpected event network-vif-plugged-cdbbf387-8780-4514-a372-9c5160d9e694 for instance with vm_state deleted and task_state None.
Feb 02 11:44:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:44:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086297629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.292 251294 DEBUG oslo_concurrency.processutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.298 251294 DEBUG nova.compute.provider_tree [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.320 251294 DEBUG nova.scheduler.client.report [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.345 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.389 251294 INFO nova.scheduler.client.report [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Deleted allocations for instance 336c40ec-af53-4724-8c01-2cec821a49f3
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.473 251294 DEBUG oslo_concurrency.lockutils [None req-4c03670e-6a23-4c7d-8560-23263a5f7e27 abee87546a344ef285e2e269d2c74792 3240aa599bd249a3b72e42fcc47af557 - - default default] Lock "336c40ec-af53-4724-8c01-2cec821a49f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:15.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:15 compute-0 nova_compute[251290]: 2026-02-02 11:44:15.837 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:16 compute-0 ceph-mon[74676]: pgmap v979: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Feb 02 11:44:16 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3086297629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Feb 02 11:44:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:44:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:44:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:17.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:17.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:17 compute-0 nova_compute[251290]: 2026-02-02 11:44:17.820 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:18 compute-0 ceph-mon[74676]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Feb 02 11:44:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Feb 02 11:44:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:44:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:44:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:18.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:44:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:19.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:20 compute-0 ceph-mon[74676]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Feb 02 11:44:20 compute-0 nova_compute[251290]: 2026-02-02 11:44:20.456 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:20 compute-0 nova_compute[251290]: 2026-02-02 11:44:20.486 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Feb 02 11:44:20 compute-0 nova_compute[251290]: 2026-02-02 11:44:20.838 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:21.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:21.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:22 compute-0 podman[271610]: 2026-02-02 11:44:22.292076163 +0000 UTC m=+0.081987322 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 11:44:22 compute-0 podman[271611]: 2026-02-02 11:44:22.294113681 +0000 UTC m=+0.080541360 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 02 11:44:22 compute-0 ceph-mon[74676]: pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Feb 02 11:44:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:22.683 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:22.683 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:44:22.684 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:44:22 compute-0 nova_compute[251290]: 2026-02-02 11:44:22.821 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:24 compute-0 ceph-mon[74676]: pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:44:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:44:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:25.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:25 compute-0 nova_compute[251290]: 2026-02-02 11:44:25.839 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:26 compute-0 ceph-mon[74676]: pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:44:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:44:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:26] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:44:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:26] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Feb 02 11:44:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:27.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:27 compute-0 sudo[271662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:44:27 compute-0 sudo[271662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:27 compute-0 sudo[271662]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:27.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:27 compute-0 nova_compute[251290]: 2026-02-02 11:44:27.791 251294 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770032652.7861073, 336c40ec-af53-4724-8c01-2cec821a49f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 11:44:27 compute-0 nova_compute[251290]: 2026-02-02 11:44:27.791 251294 INFO nova.compute.manager [-] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] VM Stopped (Lifecycle Event)
Feb 02 11:44:27 compute-0 nova_compute[251290]: 2026-02-02 11:44:27.815 251294 DEBUG nova.compute.manager [None req-8c6ef622-1864-49df-971a-79806a84744b - - - - - -] [instance: 336c40ec-af53-4724-8c01-2cec821a49f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 11:44:27 compute-0 nova_compute[251290]: 2026-02-02 11:44:27.824 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:27.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:28 compute-0 ceph-mon[74676]: pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:44:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:28.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:44:29
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', '.nfs', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups']
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:44:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:44:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:44:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:29.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:44:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:44:30 compute-0 ceph-mon[74676]: pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:30 compute-0 nova_compute[251290]: 2026-02-02 11:44:30.841 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:31 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:44:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:44:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:31.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:44:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:32 compute-0 ceph-mon[74676]: pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:32 compute-0 nova_compute[251290]: 2026-02-02 11:44:32.826 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:33.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:34 compute-0 ceph-mon[74676]: pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:35 compute-0 nova_compute[251290]: 2026-02-02 11:44:35.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/940327363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:35 compute-0 nova_compute[251290]: 2026-02-02 11:44:35.843 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.042 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.042 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.042 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.043 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.043 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:44:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:44:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942340737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.522 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:44:36 compute-0 ceph-mon[74676]: pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1977661173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2922643130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/942340737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.704 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.705 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4566MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.705 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.705 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:44:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.844 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.845 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:44:36 compute-0 nova_compute[251290]: 2026-02-02 11:44:36.923 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:44:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:36] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:44:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:36] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:44:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:37.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:44:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803854540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.407 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.412 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.438 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.474 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.475 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:44:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1347247021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3803854540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:37.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:37 compute-0 nova_compute[251290]: 2026-02-02 11:44:37.828 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:38 compute-0 nova_compute[251290]: 2026-02-02 11:44:38.476 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:38 compute-0 nova_compute[251290]: 2026-02-02 11:44:38.477 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:38 compute-0 ceph-mon[74676]: pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:38.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:39 compute-0 nova_compute[251290]: 2026-02-02 11:44:39.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:44:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:39.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:44:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:39.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:40 compute-0 nova_compute[251290]: 2026-02-02 11:44:40.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:40 compute-0 ceph-mon[74676]: pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Feb 02 11:44:40 compute-0 nova_compute[251290]: 2026-02-02 11:44:40.845 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:41 compute-0 nova_compute[251290]: 2026-02-02 11:44:41.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:41 compute-0 nova_compute[251290]: 2026-02-02 11:44:41.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:44:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:41 compute-0 ceph-mon[74676]: pgmap v992: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:44:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/732721710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:44:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:42 compute-0 nova_compute[251290]: 2026-02-02 11:44:42.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:42 compute-0 nova_compute[251290]: 2026-02-02 11:44:42.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:42 compute-0 nova_compute[251290]: 2026-02-02 11:44:42.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 11:44:42 compute-0 nova_compute[251290]: 2026-02-02 11:44:42.046 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 11:44:42 compute-0 nova_compute[251290]: 2026-02-02 11:44:42.830 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:44:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:44:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:43.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:44:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:44:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/448505042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:44:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:44:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/448505042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:44:43 compute-0 ceph-mon[74676]: pgmap v993: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:44:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/448505042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:44:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/448505042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.035 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.035 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.035 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 11:44:44 compute-0 nova_compute[251290]: 2026-02-02 11:44:44.046 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:44:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:44:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:44:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:44:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:45.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:45 compute-0 nova_compute[251290]: 2026-02-02 11:44:45.848 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:46 compute-0 ceph-mon[74676]: pgmap v994: 353 pgs: 353 active+clean; 41 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:44:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:44:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:46] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:44:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:46] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:44:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4199184244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:44:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/542981487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb 02 11:44:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:47.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:47 compute-0 sudo[271753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:44:47 compute-0 sudo[271753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:47 compute-0 sudo[271753]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:47.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:47 compute-0 nova_compute[251290]: 2026-02-02 11:44:47.832 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=cleanup t=2026-02-02T11:44:47.980545235Z level=info msg="Completed cleanup jobs" duration=8.092832ms
Feb 02 11:44:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugins.update.checker t=2026-02-02T11:44:48.098248958Z level=info msg="Update check succeeded" duration=59.555177ms
Feb 02 11:44:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana.update.checker t=2026-02-02T11:44:48.100431901Z level=info msg="Update check succeeded" duration=64.932511ms
Feb 02 11:44:48 compute-0 ceph-mon[74676]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:44:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:48.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:44:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:49.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:49.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:50 compute-0 ceph-mon[74676]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 02 11:44:50 compute-0 nova_compute[251290]: 2026-02-02 11:44:50.849 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:44:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:51.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:44:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:52 compute-0 ceph-mon[74676]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 02 11:44:52 compute-0 nova_compute[251290]: 2026-02-02 11:44:52.833 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 02 11:44:53 compute-0 sudo[271801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:44:53 compute-0 sudo[271801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:53 compute-0 sudo[271801]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:53 compute-0 podman[271784]: 2026-02-02 11:44:53.292944332 +0000 UTC m=+0.081948740 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:44:53 compute-0 podman[271785]: 2026-02-02 11:44:53.301143797 +0000 UTC m=+0.086769408 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:44:53 compute-0 sudo[271853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:44:53 compute-0 sudo[271853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:53 compute-0 sudo[271853]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:53.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:53 compute-0 sudo[271914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:44:53 compute-0 sudo[271914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:53 compute-0 sudo[271914]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:54 compute-0 sudo[271939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Feb 02 11:44:54 compute-0 sudo[271939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:44:54 compute-0 sudo[271939]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:44:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:44:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:44:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:44:54 compute-0 sudo[271983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:44:54 compute-0 sudo[271983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:54 compute-0 sudo[271983]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:54 compute-0 sudo[272008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:44:54 compute-0 sudo[272008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:54 compute-0 podman[272074]: 2026-02-02 11:44:54.951185175 +0000 UTC m=+0.038054291 container create f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:44:54 compute-0 systemd[1]: Started libpod-conmon-f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324.scope.
Feb 02 11:44:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:54.932911072 +0000 UTC m=+0.019780208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:55.031052815 +0000 UTC m=+0.117921951 container init f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:55.036970974 +0000 UTC m=+0.123840090 container start f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:55.040216087 +0000 UTC m=+0.127085583 container attach f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:44:55 compute-0 compassionate_ride[272091]: 167 167
Feb 02 11:44:55 compute-0 systemd[1]: libpod-f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324.scope: Deactivated successfully.
Feb 02 11:44:55 compute-0 conmon[272091]: conmon f4f932357565f78c95db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324.scope/container/memory.events
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:55.044986624 +0000 UTC m=+0.131855740 container died f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b5df5160b37a7445cd3e0b4117a07f5f5bfdf18a515ef4b06cc73d21cfd451f-merged.mount: Deactivated successfully.
Feb 02 11:44:55 compute-0 podman[272074]: 2026-02-02 11:44:55.090433427 +0000 UTC m=+0.177302543 container remove f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:44:55 compute-0 systemd[1]: libpod-conmon-f4f932357565f78c95dbb8d70072b88fe589756219220ea1b0310ee3f5095324.scope: Deactivated successfully.
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.234202488 +0000 UTC m=+0.042476899 container create 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:44:55 compute-0 systemd[1]: Started libpod-conmon-615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5.scope.
Feb 02 11:44:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.218254301 +0000 UTC m=+0.026528732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.320866562 +0000 UTC m=+0.129141003 container init 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.327302027 +0000 UTC m=+0.135576438 container start 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:44:55 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.331364763 +0000 UTC m=+0.139639204 container attach 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:44:55 compute-0 modest_feistel[272131]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:44:55 compute-0 modest_feistel[272131]: --> All data devices are unavailable
Feb 02 11:44:55 compute-0 systemd[1]: libpod-615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5.scope: Deactivated successfully.
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.657404509 +0000 UTC m=+0.465678920 container died 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb 02 11:44:55 compute-0 ovn_controller[154901]: 2026-02-02T11:44:55Z|00098|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 02 11:44:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:55.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a1d430fb2a3e42ad3079e4776d2c9b2633a2e1d49551b751ea2284252cc8a8-merged.mount: Deactivated successfully.
Feb 02 11:44:55 compute-0 podman[272114]: 2026-02-02 11:44:55.699932488 +0000 UTC m=+0.508206929 container remove 615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:44:55 compute-0 systemd[1]: libpod-conmon-615dac3eaa6a8b47736aa77540a87a051a5476413978b5d3c09cc953faa15fb5.scope: Deactivated successfully.
Feb 02 11:44:55 compute-0 sudo[272008]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:55 compute-0 sudo[272159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:44:55 compute-0 sudo[272159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:55 compute-0 sudo[272159]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:55 compute-0 nova_compute[251290]: 2026-02-02 11:44:55.853 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:44:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:55.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:44:55 compute-0 sudo[272184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:44:55 compute-0 sudo[272184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:44:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:44:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:44:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:44:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.247601747 +0000 UTC m=+0.036583600 container create caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:44:56 compute-0 systemd[1]: Started libpod-conmon-caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9.scope.
Feb 02 11:44:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.325570152 +0000 UTC m=+0.114552025 container init caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.231545027 +0000 UTC m=+0.020526900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.333490189 +0000 UTC m=+0.122472042 container start caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.337780282 +0000 UTC m=+0.126762165 container attach caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:44:56 compute-0 peaceful_swirles[272265]: 167 167
Feb 02 11:44:56 compute-0 systemd[1]: libpod-caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9.scope: Deactivated successfully.
Feb 02 11:44:56 compute-0 conmon[272265]: conmon caec8961b2fc138045cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9.scope/container/memory.events
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.341228971 +0000 UTC m=+0.130210824 container died caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb 02 11:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6037637e88645c8feb1ba7d91d6f981d00c3530e8626b19fd2506b9496688c6-merged.mount: Deactivated successfully.
Feb 02 11:44:56 compute-0 podman[272249]: 2026-02-02 11:44:56.37573513 +0000 UTC m=+0.164716983 container remove caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 11:44:56 compute-0 systemd[1]: libpod-conmon-caec8961b2fc138045cf25ba78dedb5afe15e5d718078a8131cd4e3247c7b2a9.scope: Deactivated successfully.
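Annotation: the podman/systemd/conmon lines above are one complete lifecycle of a throwaway container (create, init, start, attach, died, remove, all within a single second). This is how cephadm probes a host through the ceph image, and the container's only output, "167 167", matches the uid/gid of the ceph user in Red Hat ceph images. The conmon warning about memory.events is likely a side effect of the cgroup disappearing before conmon can read it. A minimal sketch of such a probe in Python, assuming podman and the logged image digest are available locally; the stat invocation is an illustrative guess at the probe, not a command recovered from this log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Run a throwaway container whose only job is to report the owner of
    # /var/lib/ceph inside the image; --rm deletes it immediately, which is
    # why the journal shows create/start/died/remove back to back.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    uid, gid = out.split()  # expected "167 167", as logged above
    print(f"ceph uid={uid} gid={gid}")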
Feb 02 11:44:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.518411299 +0000 UTC m=+0.047594246 container create 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:44:56 compute-0 systemd[1]: Started libpod-conmon-413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46.scope.
Feb 02 11:44:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8af53c67076d2369b1f98a98752c6045ca8b9fc16a79e42a34001d6d921f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8af53c67076d2369b1f98a98752c6045ca8b9fc16a79e42a34001d6d921f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8af53c67076d2369b1f98a98752c6045ca8b9fc16a79e42a34001d6d921f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8af53c67076d2369b1f98a98752c6045ca8b9fc16a79e42a34001d6d921f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.498681143 +0000 UTC m=+0.027864120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.600046299 +0000 UTC m=+0.129229276 container init 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.605069463 +0000 UTC m=+0.134252410 container start 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.608233183 +0000 UTC m=+0.137416130 container attach 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:44:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
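Annotation: _set_new_cache_sizes splits a roughly 0.95 GiB monitor cache target between incremental-map, full-map, and key-value allocations. The three figures logged above sum to just under the target; a quick check using only the logged values:

    cache_size = 1020054731
    allocs = {"inc": 343932928, "full": 348127232, "kv": 318767104}
    total = sum(allocs.values())
    # ~99.1% of the target is handed out; the remainder is headroom.
    print(f"{total} of {cache_size} allocated ({total / cache_size:.1%})")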
Feb 02 11:44:56 compute-0 practical_merkle[272307]: {
Feb 02 11:44:56 compute-0 practical_merkle[272307]:     "1": [
Feb 02 11:44:56 compute-0 practical_merkle[272307]:         {
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "devices": [
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "/dev/loop3"
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             ],
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "lv_name": "ceph_lv0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "lv_size": "21470642176",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "name": "ceph_lv0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "tags": {
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.cluster_name": "ceph",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.crush_device_class": "",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.encrypted": "0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.osd_id": "1",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.type": "block",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.vdo": "0",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:                 "ceph.with_tpm": "0"
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             },
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "type": "block",
Feb 02 11:44:56 compute-0 practical_merkle[272307]:             "vg_name": "ceph_vg0"
Feb 02 11:44:56 compute-0 practical_merkle[272307]:         }
Feb 02 11:44:56 compute-0 practical_merkle[272307]:     ]
Feb 02 11:44:56 compute-0 practical_merkle[272307]: }
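Annotation: the JSON printed by the practical_merkle container is an OSD inventory keyed by OSD id: osd.1 is backed by logical volume ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster and OSD fsids carried as LV tags. Given the "ceph-volume ... raw list" invocation logged just below, this block is plausibly the companion "ceph-volume lvm list --format json" run (an assumption; the exact command for this container is not shown). A short sketch that extracts the osd-to-device mapping from a condensed copy of the structure above:

    import json

    # Condensed from the JSON logged above; values are copied verbatim.
    raw = json.dumps({
        "1": [{
            "devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "type": "block",
            "tags": {
                "ceph.osd_id": "1",
                "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
                "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
            },
        }],
    })

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")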
Feb 02 11:44:56 compute-0 systemd[1]: libpod-413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46.scope: Deactivated successfully.
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.91790929 +0000 UTC m=+0.447092237 container died 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c8af53c67076d2369b1f98a98752c6045ca8b9fc16a79e42a34001d6d921f1-merged.mount: Deactivated successfully.
Feb 02 11:44:56 compute-0 podman[272290]: 2026-02-02 11:44:56.959030029 +0000 UTC m=+0.488212976 container remove 413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb 02 11:44:56 compute-0 systemd[1]: libpod-conmon-413320d4db95d9299f83d6d394603f0d672cb0f28c4d82b1032abfa03ea68d46.scope: Deactivated successfully.
Feb 02 11:44:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:56] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:44:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:44:56] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:44:57 compute-0 sudo[272184]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:57 compute-0 sudo[272328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:44:57 compute-0 sudo[272328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:57 compute-0 sudo[272328]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:57 compute-0 sudo[272353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:44:57 compute-0 sudo[272353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:57.188Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:44:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:57.188Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:44:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:57.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
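Annotation: both dashboard webhook receivers fail at the TCP connect stage (dial tcp ... i/o timeout), so alertmanager exhausts its retries without ever completing an HTTP exchange; port 8443 on compute-1 and compute-2 is simply unreachable from this host at this point. A plain connectivity probe against the same endpoints, assuming the names resolve as the log shows:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            # Same host:port the failing webhook POSTs target.
            socket.create_connection((host, 8443), timeout=5).close()
            print(f"{host}:8443 reachable")
        except OSError as exc:
            print(f"{host}:8443 unreachable: {exc}")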
Feb 02 11:44:57 compute-0 ceph-mon[74676]: pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.52178155 +0000 UTC m=+0.043416005 container create ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:44:57 compute-0 systemd[1]: Started libpod-conmon-ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d.scope.
Feb 02 11:44:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.596145122 +0000 UTC m=+0.117779607 container init ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.504663219 +0000 UTC m=+0.026297704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.603215184 +0000 UTC m=+0.124849629 container start ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.6072442 +0000 UTC m=+0.128878655 container attach ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:44:57 compute-0 heuristic_shaw[272437]: 167 167
Feb 02 11:44:57 compute-0 systemd[1]: libpod-ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d.scope: Deactivated successfully.
Feb 02 11:44:57 compute-0 conmon[272437]: conmon ccbd64e22b99f416be88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d.scope/container/memory.events
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.609858195 +0000 UTC m=+0.131492650 container died ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 11:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aa821625b11796a98f6c5a088a1686d5cfaeaa52fdec54ad5bd42562f59eba9-merged.mount: Deactivated successfully.
Feb 02 11:44:57 compute-0 podman[272420]: 2026-02-02 11:44:57.651438697 +0000 UTC m=+0.173073152 container remove ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 11:44:57 compute-0 systemd[1]: libpod-conmon-ccbd64e22b99f416be882bf6a72209588cfe52e0c4993a719c9495fb6c40845d.scope: Deactivated successfully.
Feb 02 11:44:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
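Annotation: the anonymous HEAD / requests that radosgw keeps answering (alternating sources 192.168.122.100 and .102, roughly every two seconds, HTTP/1.0, 200 with an empty body and sub-millisecond latency) have the shape of load-balancer health checks rather than user traffic. One such probe can be reproduced by hand; the beast frontend port is not shown in the log, so the 8080 below is an assumption:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")        # anonymous, no auth headers
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy gateway answers 200 OK
    conn.close()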
Feb 02 11:44:57 compute-0 podman[272463]: 2026-02-02 11:44:57.786163639 +0000 UTC m=+0.042294004 container create 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:44:57 compute-0 systemd[1]: Started libpod-conmon-3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880.scope.
Feb 02 11:44:57 compute-0 nova_compute[251290]: 2026-02-02 11:44:57.834 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:44:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecb4e1e3e7dac7a135a5793f05d57d41d32fcd3a7492f4b58bde44b595fb78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecb4e1e3e7dac7a135a5793f05d57d41d32fcd3a7492f4b58bde44b595fb78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecb4e1e3e7dac7a135a5793f05d57d41d32fcd3a7492f4b58bde44b595fb78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecb4e1e3e7dac7a135a5793f05d57d41d32fcd3a7492f4b58bde44b595fb78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:44:57 compute-0 podman[272463]: 2026-02-02 11:44:57.767468723 +0000 UTC m=+0.023599108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:44:57 compute-0 podman[272463]: 2026-02-02 11:44:57.870209388 +0000 UTC m=+0.126339783 container init 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:44:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:44:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:44:57 compute-0 podman[272463]: 2026-02-02 11:44:57.877446915 +0000 UTC m=+0.133577280 container start 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:44:57 compute-0 podman[272463]: 2026-02-02 11:44:57.882937623 +0000 UTC m=+0.139068008 container attach 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:44:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Feb 02 11:44:58 compute-0 lvm[272553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:44:58 compute-0 lvm[272553]: VG ceph_vg0 finished
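Annotation: the lvm[272553] pair records event-based autoactivation: as soon as /dev/loop3 comes online, ceph_vg0 has all of its physical volumes and activation of the VG proceeds. The same PV-to-VG picture can be read back with the LVM reporting tools; a sketch assuming the lvm2 utilities are installed and the caller has root:

    import json, subprocess

    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True,
    ).stdout
    for pv in json.loads(out)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0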
Feb 02 11:44:58 compute-0 hopeful_mirzakhani[272479]: {}
Feb 02 11:44:58 compute-0 systemd[1]: libpod-3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880.scope: Deactivated successfully.
Feb 02 11:44:58 compute-0 systemd[1]: libpod-3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880.scope: Consumed 1.083s CPU time.
Feb 02 11:44:58 compute-0 podman[272463]: 2026-02-02 11:44:58.600522642 +0000 UTC m=+0.856653037 container died 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ecb4e1e3e7dac7a135a5793f05d57d41d32fcd3a7492f4b58bde44b595fb78a-merged.mount: Deactivated successfully.
Feb 02 11:44:58 compute-0 podman[272463]: 2026-02-02 11:44:58.645424739 +0000 UTC m=+0.901555104 container remove 3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:44:58 compute-0 systemd[1]: libpod-conmon-3120768877c6ff324ed903fa4d8d9cdce7c7c8bb0ba2e5338850e8ac5d93c880.scope: Deactivated successfully.
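Annotation: this second probe container, hopeful_mirzakhani, printed only {}. Matching it against the sudo record at 11:44:57, it ran "ceph-volume ... raw list --format json", and the empty object means no raw-mode (non-LVM) OSDs exist on this host, consistent with osd.1 living on a logical volume as listed earlier. A sketch of consuming that result, using the command line from the log; the field names in the populated branch are assumptions about raw-list output, not taken from this log:

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "raw", "list", "--format", "json"],  # needs root
        check=True, capture_output=True, text=True,
    ).stdout
    devices = json.loads(out or "{}")
    if not devices:
        print("no raw-mode OSDs on this host (matches the '{}' above)")
    else:
        for osd_uuid, info in devices.items():
            print(osd_uuid, info.get("device"), info.get("osd_id"))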
Feb 02 11:44:58 compute-0 sudo[272353]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:44:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:44:58 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:58 compute-0 sudo[272569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:44:58 compute-0 sudo[272569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:44:58 compute-0 sudo[272569]: pam_unix(sudo:session): session closed for user root
Feb 02 11:44:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:44:58.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:44:59 compute-0 ceph-mon[74676]: pgmap v1001: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Feb 02 11:44:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:44:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:44:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
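Annotation: the audit trail shows the mgr issuing "osd blocklist ls" against the monitor as part of its periodic housekeeping. The same query works from any authenticated client; a sketch via the CLI, assuming a usable ceph.conf and keyring on the host (the addr/until field names reflect the usual shape of the JSON output and are stated here as an assumption):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(f"{len(entries)} blocklisted client(s)")
    for e in entries:
        print(e.get("addr"), "until", e.get("until"))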
Feb 02 11:44:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:44:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:44:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:44:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:44:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:44:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:44:59.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Feb 02 11:45:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:00 compute-0 nova_compute[251290]: 2026-02-02 11:45:00.856 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
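Annotation: the four ganesha.nfsd lines are one full grace-period cycle: the server re-enters a 90 second grace window, reloads client recovery state from the RADOS backend, finds no clients with pending reclaims, and rados_cluster_grace_enforcing comes back with ret=-45. Whether that value is a negative Linux errno is an assumption (Ganesha could be returning its own code here), but if it is, it decodes as follows:

    import errno, os

    ret = -45
    code = -ret
    # Linux-specific lookup: errno 45 is EL2NSYNC on Linux; other
    # platforms assign 45 differently.
    print(errno.errorcode.get(code, "unknown"), "-", os.strerror(code))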
Feb 02 11:45:01 compute-0 ceph-mon[74676]: pgmap v1002: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Feb 02 11:45:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Feb 02 11:45:02 compute-0 nova_compute[251290]: 2026-02-02 11:45:02.838 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:03 compute-0 ceph-mon[74676]: pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Feb 02 11:45:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:03.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Feb 02 11:45:05 compute-0 ceph-mon[74676]: pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Feb 02 11:45:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:05.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:05 compute-0 nova_compute[251290]: 2026-02-02 11:45:05.857 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:06] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:45:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:06] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Feb 02 11:45:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:07.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:45:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:07.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:45:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:07.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:07 compute-0 sudo[272603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:45:07 compute-0 sudo[272603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:45:07 compute-0 sudo[272603]: pam_unix(sudo:session): session closed for user root
Feb 02 11:45:07 compute-0 ceph-mon[74676]: pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:07.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:07 compute-0 nova_compute[251290]: 2026-02-02 11:45:07.839 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:07.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:08.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:09.277 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:45:09 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:09.278 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:45:09 compute-0 nova_compute[251290]: 2026-02-02 11:45:09.278 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:09 compute-0 ceph-mon[74676]: pgmap v1006: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.612454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709612535, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1181, "num_deletes": 503, "total_data_size": 1610088, "memory_usage": 1632680, "flush_reason": "Manual Compaction"}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709621257, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1214117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28279, "largest_seqno": 29459, "table_properties": {"data_size": 1209289, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14900, "raw_average_key_size": 19, "raw_value_size": 1197329, "raw_average_value_size": 1585, "num_data_blocks": 79, "num_entries": 755, "num_filter_entries": 755, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032637, "oldest_key_time": 1770032637, "file_creation_time": 1770032709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 8847 microseconds, and 3499 cpu microseconds.
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.621319) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1214117 bytes OK
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.621348) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.628363) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.628391) EVENT_LOG_v1 {"time_micros": 1770032709628385, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.628414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1603610, prev total WAL file size 1603610, number of live WAL files 2.
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.629118) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1185KB)], [62(16MB)]
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709629163, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18119507, "oldest_snapshot_seqno": -1}
Feb 02 11:45:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5802 keys, 12253878 bytes, temperature: kUnknown
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709721573, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12253878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12217330, "index_size": 20950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 150141, "raw_average_key_size": 25, "raw_value_size": 12114717, "raw_average_value_size": 2088, "num_data_blocks": 838, "num_entries": 5802, "num_filter_entries": 5802, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.721937) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12253878 bytes
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.723708) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.8 rd, 132.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 16.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(25.0) write-amplify(10.1) OK, records in: 6805, records dropped: 1003 output_compression: NoCompression
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.723734) EVENT_LOG_v1 {"time_micros": 1770032709723721, "job": 34, "event": "compaction_finished", "compaction_time_micros": 92537, "compaction_time_cpu_micros": 25417, "output_level": 6, "num_output_files": 1, "total_output_size": 12253878, "num_input_records": 6805, "num_output_records": 5802, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
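[annotation] The amplification and throughput figures in the "compacted to" summary above follow directly from the EVENT_LOG numbers, assuming (as the input list for JOB 34 states) that file 64 at 1185KB is the only L0 input:

    input_l0    = 1185 * 1024      # L0 input, file 64 ("1185KB")
    input_total = 18_119_507       # "input_data_size"
    output      = 12_253_878       # "total_output_size"
    elapsed     = 92_537 / 1e6     # "compaction_time_micros", in seconds

    print(output / input_l0)                  # write-amplify      ~10.1
    print((input_total + output) / input_l0)  # read-write-amplify ~25.0
    print(input_total / elapsed / 1e6)        # rd MB/sec          ~195.8
    print(output / elapsed / 1e6)             # wr MB/sec          ~132.4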
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709724055, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032709726275, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.628981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.726306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.726312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.726314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.726316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:45:09.726317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:45:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:09.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:10 compute-0 nova_compute[251290]: 2026-02-02 11:45:10.859 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
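[annotation] ganesha's rados_cluster_grace_enforcing keeps returning ret=-45 on each grace cycle. Read as a negated Linux errno (my assumption — the log does not say the value is an errno), that decodes as EL2NSYNC:

    import errno, os
    print(errno.errorcode[45], os.strerror(45))
    # EL2NSYNC Level 2 not synchronized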
Feb 02 11:45:11 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 02 11:45:11 compute-0 ceph-mon[74676]: pgmap v1007: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 11:45:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:11.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:11.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 02 11:45:12 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/400020348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:12 compute-0 nova_compute[251290]: 2026-02-02 11:45:12.841 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:13 compute-0 ceph-mon[74676]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 02 11:45:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:13.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Feb 02 11:45:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:45:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:14 compute-0 ceph-mon[74676]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Feb 02 11:45:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:15.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:15 compute-0 nova_compute[251290]: 2026-02-02 11:45:15.861 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:15.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:16 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:16.280 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:45:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Feb 02 11:45:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:45:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:45:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:17.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
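[annotation] The alertmanager dispatcher repeatedly fails to POST to the dashboard receivers on compute-1 and compute-2 with "context deadline exceeded", i.e. the requests time out rather than being refused. A direct probe from this host narrows down whether the endpoint is reachable at all (URL copied from the log; the 5-second timeout and empty payload are arbitrary test values):

    import requests
    r = requests.post(
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        json={}, timeout=5)
    print(r.status_code)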
Feb 02 11:45:17 compute-0 ceph-mon[74676]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Feb 02 11:45:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:17 compute-0 nova_compute[251290]: 2026-02-02 11:45:17.845 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:45:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:18.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:19 compute-0 ceph-mon[74676]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:45:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:45:20 compute-0 nova_compute[251290]: 2026-02-02 11:45:20.863 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:21 compute-0 ceph-mon[74676]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 11:45:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:21.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:45:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:22.683 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:45:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:22.684 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:45:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:45:22.684 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
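[annotation] The acquire/release pairs above (waited 0.001s, held 0.000s) are emitted by oslo.concurrency's lock decorator around the agent's child-process check. The equivalent pattern in application code is roughly:

    from oslo_concurrency import lockutils

    # In-process named lock, as logged by
    # ProcessMonitor._check_child_processes above; the lock name matches
    # the one in the log, the function body is illustrative.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # runs with the named lock held; acquire/release are logged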
Feb 02 11:45:22 compute-0 nova_compute[251290]: 2026-02-02 11:45:22.849 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:23 compute-0 ceph-mon[74676]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb 02 11:45:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:24 compute-0 podman[272646]: 2026-02-02 11:45:24.296750122 +0000 UTC m=+0.081050215 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 02 11:45:24 compute-0 podman[272647]: 2026-02-02 11:45:24.301391155 +0000 UTC m=+0.084984797 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 02 11:45:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:25 compute-0 ceph-mon[74676]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:25 compute-0 nova_compute[251290]: 2026-02-02 11:45:25.866 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:45:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:45:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:27.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:27 compute-0 sudo[272694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:45:27 compute-0 sudo[272694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:45:27 compute-0 sudo[272694]: pam_unix(sudo:session): session closed for user root
Feb 02 11:45:27 compute-0 ceph-mon[74676]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:27 compute-0 nova_compute[251290]: 2026-02-02 11:45:27.853 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:27.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:28.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:45:29
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:45:29 compute-0 ceph-mon[74676]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:45:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:29.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:45:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
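[annotation] The pg_autoscaler targets above are reproducible from the logged inputs: each pool's raw target is its fraction of capacity times its bias times the cluster PG budget, which these figures imply is 300 — consistent with 3 OSDs at the default mon_target_pg_per_osd=100, an inference since the budget itself is not logged. The raw target is then quantized to a power of two, subject to per-pool minimums:

    budget = 3 * 100   # OSDs * mon_target_pg_per_osd (inferred, see above)

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * budget

    print(raw_pg_target(7.185749983720779e-06, 1.0))
    # 0.00215574... -> matches the '.mgr' target in the log
    print(raw_pg_target(5.087256625643029e-07, 4.0))
    # 0.00061047... -> matches the 'cephfs.cephfs.meta' target in the log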
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:45:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:30 compute-0 nova_compute[251290]: 2026-02-02 11:45:30.906 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:31 compute-0 ceph-mon[74676]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:31.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:32 compute-0 ceph-mon[74676]: pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:32 compute-0 nova_compute[251290]: 2026-02-02 11:45:32.856 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:33.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:35 compute-0 ceph-mon[74676]: pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:35 compute-0 nova_compute[251290]: 2026-02-02 11:45:35.909 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:35.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2333997823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:45:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.040 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.041 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.041 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.067 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.067 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.068 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.068 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.068 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:45:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:37.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:37 compute-0 ceph-mon[74676]: pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3423326519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:37 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3505916932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.580 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
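[annotation] This resource-audit pass shells out to ceph df twice (here and again at 11:45:37.994), each call taking ~0.5s. The probe itself is easy to reproduce; the JSON field names below ("total_avail_bytes", per-pool "max_avail") are my assumption about the ceph df schema, not something this log shows:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    print(df['stats']['total_avail_bytes'])          # cluster-wide free bytes
    for pool in df['pools']:
        print(pool['name'], pool['stats']['max_avail'])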
Feb 02 11:45:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.768 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.770 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.770 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.770 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.839 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.840 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.860 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.863 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.922 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.923 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
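[annotation] The inventory pushed to placement above determines schedulable capacity per resource class as (total - reserved) * allocation_ratio, placement's standard capacity formula. Worked through with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2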
Feb 02 11:45:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:37.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.945 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.976 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:45:37 compute-0 nova_compute[251290]: 2026-02-02 11:45:37.994 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:45:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:45:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829843484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:38 compute-0 nova_compute[251290]: 2026-02-02 11:45:38.495 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
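The "Running cmd (subprocess)" / "returned: 0 in 0.501s" pair above is oslo.concurrency's processutils wrapper, which the RBD-backed disk reporting path uses to size the Ceph pool. A sketch of the same call shape, assuming the ceph CLI and /etc/ceph/ceph.conf are present as they are on this host:

    # processutils.execute runs the command and returns (stdout, stderr),
    # emitting the two DEBUG lines seen above around the call.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # cluster-wide free bytes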
Feb 02 11:45:38 compute-0 nova_compute[251290]: 2026-02-02 11:45:38.500 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:45:38 compute-0 nova_compute[251290]: 2026-02-02 11:45:38.527 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:45:38 compute-0 nova_compute[251290]: 2026-02-02 11:45:38.530 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:45:38 compute-0 nova_compute[251290]: 2026-02-02 11:45:38.530 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
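The "compute_resources" acquire/release bracket (waited 0.000s, held 0.760s) comes from oslo.concurrency's synchronized decorator, which serializes the resource tracker's update against concurrent instance claims. A minimal sketch of the pattern, with an illustrative function body:

    # lockutils.synchronized emits the Acquiring/acquired/released DEBUG
    # lines seen above, including the waited/held timings.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Everything here runs under the same semaphore the resource
        # tracker uses, so inventory refreshes cannot race claims.
        pass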
Feb 02 11:45:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3711337504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/690332240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1829843484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:45:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:38.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
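Alertmanager keeps failing to deliver to the Ceph dashboard receivers on compute-1 and compute-2: both POSTs to :8443/api/prometheus_receiver hit the notification's context deadline. A quick reachability probe for those two endpoints, assuming nothing beyond the hostnames and port taken from the log:

    # TCP-level check of the two webhook receivers that keep timing out.
    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, 'port 8443 reachable')
        except OSError as exc:
            print(host, 'unreachable:', exc)

The later "dial tcp ... i/o timeout" entries (11:45:48, 11:45:57) point at a connect-level failure rather than a slow application, so a probe like this would likely fail the same way.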
Feb 02 11:45:39 compute-0 ceph-mon[74676]: pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:39.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
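The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102, recurring every two seconds with near-zero latency, have the shape of load-balancer health probes rather than user traffic. A sketch that reproduces one such probe; the RGW port is an assumption, since the beast access lines do not record it:

    # Reproduce the anonymous HEAD probe seen in the radosgw access log.
    import http.client

    conn = http.client.HTTPConnection('compute-0.ctlplane.example.com',
                                      8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the logged probes all return 200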
Feb 02 11:45:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:40 compute-0 nova_compute[251290]: 2026-02-02 11:45:40.509 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:40 compute-0 nova_compute[251290]: 2026-02-02 11:45:40.910 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
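The ganesha.nfsd block above repeats every five seconds throughout this capture: the server re-enters a 90-second grace period, reloads client info from the RADOS backend, finds zero clients with pending reclaims, and rados_cluster_grace_enforcing returns -45. To follow the cycle across a long capture, a small filter helps; 'journal.txt' is a hypothetical export of this log:

    # Pull only the ganesha grace-period events out of a saved journal.
    import re

    pat = re.compile(r'ganesha\.nfsd-\d+\[main\] (\w+) .*:EVENT :(.*)')
    with open('journal.txt') as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                print(m.group(1), '->', m.group(2).strip())
    # nfs_start_grace -> NFS Server Now IN GRACE, duration 90
    # nfs_try_lift_grace -> check grace:reclaim complete(0) clid count(0)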
Feb 02 11:45:41 compute-0 nova_compute[251290]: 2026-02-02 11:45:41.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:41 compute-0 nova_compute[251290]: 2026-02-02 11:45:41.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:41 compute-0 ceph-mon[74676]: pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:41.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:41 compute-0 sshd-session[272777]: Received disconnect from 45.148.10.157 port 49274:11:  [preauth]
Feb 02 11:45:41 compute-0 sshd-session[272777]: Disconnected from authenticating user root 45.148.10.157 port 49274 [preauth]
Feb 02 11:45:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:42 compute-0 nova_compute[251290]: 2026-02-02 11:45:42.866 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:43 compute-0 nova_compute[251290]: 2026-02-02 11:45:43.014 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:43 compute-0 nova_compute[251290]: 2026-02-02 11:45:43.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:43 compute-0 nova_compute[251290]: 2026-02-02 11:45:43.018 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
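All of the "Running periodic task ComputeManager._*" lines come from a single oslo.service loop; each task then decides for itself whether to do real work, as _reclaim_queued_deletes does here by checking CONF.reclaim_instance_interval and bailing out. A minimal sketch of the pattern (class and method bodies illustrative):

    # oslo.service periodic tasks: the runner logs the "Running periodic
    # task ..." DEBUG line, then invokes each decorated method.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            interval = 0  # stands in for CONF.reclaim_instance_interval
            if interval <= 0:
                return    # the real task logs "skipping...", as above

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)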
Feb 02 11:45:43 compute-0 ceph-mon[74676]: pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:45:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:45:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3550493468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:45:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3550493468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:45:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:45:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
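The mgr dispatches "osd blocklist ls" on a timer, as the audit entries above show; the same monitor command can be issued from a shell to inspect the blocklist directly. A sketch via subprocess, assuming the ceph CLI and a readable admin keyring on this host:

    # Same mon command the mgr dispatches above, run from Python.
    import json
    import subprocess

    proc = subprocess.run(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
        check=True, capture_output=True, text=True)
    # JSON goes to stdout (the human-readable summary, if any, is on stderr).
    entries = json.loads(proc.stdout or '[]')
    print(len(entries), 'blocklisted addresses')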
Feb 02 11:45:45 compute-0 nova_compute[251290]: 2026-02-02 11:45:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:45 compute-0 nova_compute[251290]: 2026-02-02 11:45:45.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:45:45 compute-0 nova_compute[251290]: 2026-02-02 11:45:45.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:45:45 compute-0 nova_compute[251290]: 2026-02-02 11:45:45.038 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:45:45 compute-0 ceph-mon[74676]: pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:45.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:45 compute-0 nova_compute[251290]: 2026-02-02 11:45:45.912 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:46] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:45:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:46] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:45:47 compute-0 nova_compute[251290]: 2026-02-02 11:45:47.031 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:45:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:47 compute-0 sudo[272786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:45:47 compute-0 sudo[272786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:45:47 compute-0 sudo[272786]: pam_unix(sudo:session): session closed for user root
Feb 02 11:45:47 compute-0 ceph-mon[74676]: pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:47.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:47 compute-0 nova_compute[251290]: 2026-02-02 11:45:47.871 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:48.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:45:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:48.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:49 compute-0 ceph-mon[74676]: pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:50 compute-0 ceph-mon[74676]: pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:50 compute-0 nova_compute[251290]: 2026-02-02 11:45:50.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:52 compute-0 nova_compute[251290]: 2026-02-02 11:45:52.874 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:53 compute-0 ceph-mon[74676]: pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:53.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:55 compute-0 podman[272819]: 2026-02-02 11:45:55.262507065 +0000 UTC m=+0.051363743 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 11:45:55 compute-0 podman[272820]: 2026-02-02 11:45:55.296606983 +0000 UTC m=+0.085643426 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
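The two podman events above are periodic health checks for ovn_metadata_agent and ovn_controller; everything after the container ID is a flat key=value dump of labels and config. Fields like these can be picked apart with a simple scan; the sample string below abbreviates the real entry:

    # Extract name/health fields from a podman health_status event line.
    import re

    sample = ('container health_status cce63785daa0f8 '
              '(image=quay.io/podified-antelope-centos9/'
              'openstack-neutron-metadata-agent-ovn:current-podified, '
              'name=ovn_metadata_agent, health_status=healthy, '
              'health_failing_streak=0)')
    fields = dict(re.findall(r'(\w+)=([^,)]+)', sample))
    print(fields['name'], fields['health_status'],
          fields['health_failing_streak'])
    # ovn_metadata_agent healthy 0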
Feb 02 11:45:55 compute-0 ceph-mon[74676]: pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:55 compute-0 nova_compute[251290]: 2026-02-02 11:45:55.916 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:45:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:45:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:45:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:45:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:45:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:45:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:56] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:45:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:45:56] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:45:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:57.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:45:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:57.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:45:57 compute-0 ceph-mon[74676]: pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:45:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:57 compute-0 nova_compute[251290]: 2026-02-02 11:45:57.878 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:45:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:45:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:45:58.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:45:59 compute-0 sudo[272869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:45:59 compute-0 sudo[272869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:45:59 compute-0 sudo[272869]: pam_unix(sudo:session): session closed for user root
Feb 02 11:45:59 compute-0 sudo[272894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:45:59 compute-0 sudo[272894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:45:59 compute-0 ceph-mon[74676]: pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:45:59 compute-0 sudo[272894]: pam_unix(sudo:session): session closed for user root
Feb 02 11:45:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:45:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:45:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:45:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:45:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:45:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:45:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:45:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:45:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:45:59.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:00 compute-0 nova_compute[251290]: 2026-02-02 11:46:00.918 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 ceph-mon[74676]: pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:46:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:46:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:01 compute-0 sudo[272953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:46:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:01 compute-0 sudo[272953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:01 compute-0 sudo[272953]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:01 compute-0 sudo[272978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
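That sudo line is cephadm creating an OSD: it pins the ceph image by digest, exports CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group so the new OSD is tagged with the spec that created it, feeds config and keyring over stdin via "--config-json -", and runs "lvm batch" against the pre-built LV /dev/ceph_vg0/ceph_lv0 with --no-systemd because cephadm manages the systemd units itself. Long invocations like this are easier to audit token by token; the string below is a shortened stand-in for the logged command:

    # Split a cephadm/ceph-volume command line into readable tokens.
    import shlex

    cmd = ('cephadm --timeout 895 ceph-volume '
           '--fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- '
           'lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd')
    for tok in shlex.split(cmd):
        print(tok)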
Feb 02 11:46:01 compute-0 sudo[272978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:01.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.29645823 +0000 UTC m=+0.046404581 container create bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:46:02 compute-0 systemd[1]: Started libpod-conmon-bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de.scope.
Feb 02 11:46:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.276592481 +0000 UTC m=+0.026538852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.380266382 +0000 UTC m=+0.130212763 container init bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.387899561 +0000 UTC m=+0.137845902 container start bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.391186535 +0000 UTC m=+0.141132916 container attach bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:46:02 compute-0 flamboyant_ishizaka[273059]: 167 167
Feb 02 11:46:02 compute-0 systemd[1]: libpod-bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de.scope: Deactivated successfully.
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.395492789 +0000 UTC m=+0.145439160 container died bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d763e27db9deccb6da895ac59a82772f1776c6cac6ad68d1de10e8b0f662c1-merged.mount: Deactivated successfully.
Feb 02 11:46:02 compute-0 podman[273043]: 2026-02-02 11:46:02.442111165 +0000 UTC m=+0.192057516 container remove bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ishizaka, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:46:02 compute-0 systemd[1]: libpod-conmon-bcbe21e7d3a1358efdf0976a51dcc7bd41e0864630667e67e62b514a33edf1de.scope: Deactivated successfully.
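Container bcbe21e7... is a short-lived cephadm helper: podman records the full create, init, start, attach, died, remove cycle, and the only output is "167 167" (167 is the ceph user and group id inside the ceph images, so this looks like a uid/gid probe ahead of the real ceph-volume run). The whole life span can be read straight off the event timestamps:

    # Lifetime of the helper container, from the podman events above.
    from datetime import datetime

    create = datetime.fromisoformat('2026-02-02 11:46:02.296458')
    remove = datetime.fromisoformat('2026-02-02 11:46:02.442111')
    print(f'{(remove - create).total_seconds():.3f}s')  # 0.146s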
Feb 02 11:46:02 compute-0 podman[273081]: 2026-02-02 11:46:02.587038779 +0000 UTC m=+0.043377004 container create 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:46:02 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:02 compute-0 systemd[1]: Started libpod-conmon-3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5.scope.
Feb 02 11:46:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:02 compute-0 podman[273081]: 2026-02-02 11:46:02.567725486 +0000 UTC m=+0.024063751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:02 compute-0 podman[273081]: 2026-02-02 11:46:02.683990109 +0000 UTC m=+0.140328354 container init 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:46:02 compute-0 podman[273081]: 2026-02-02 11:46:02.691464623 +0000 UTC m=+0.147802848 container start 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb 02 11:46:02 compute-0 podman[273081]: 2026-02-02 11:46:02.6958842 +0000 UTC m=+0.152222425 container attach 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:46:02 compute-0 nova_compute[251290]: 2026-02-02 11:46:02.883 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:03 compute-0 beautiful_keldysh[273097]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:46:03 compute-0 beautiful_keldysh[273097]: --> All data devices are unavailable
Feb 02 11:46:03 compute-0 systemd[1]: libpod-3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5.scope: Deactivated successfully.
Feb 02 11:46:03 compute-0 podman[273081]: 2026-02-02 11:46:03.073121783 +0000 UTC m=+0.529459998 container died 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-93833cb7abda632979f73675363c9721516b28e4460b0ee2a2dd089b62d9e18d-merged.mount: Deactivated successfully.
Feb 02 11:46:03 compute-0 podman[273081]: 2026-02-02 11:46:03.12567647 +0000 UTC m=+0.582014705 container remove 3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:46:03 compute-0 systemd[1]: libpod-conmon-3bbefabff529c1b9d77c7efff90b027e006dcea26c0ab8fb9b25e54c3d4afbb5.scope: Deactivated successfully.
Feb 02 11:46:03 compute-0 sudo[272978]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:03 compute-0 sudo[273124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:46:03 compute-0 sudo[273124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:03 compute-0 sudo[273124]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:03 compute-0 sudo[273149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:46:03 compute-0 sudo[273149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:03 compute-0 ceph-mon[74676]: pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.683729516 +0000 UTC m=+0.046116833 container create 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb 02 11:46:03 compute-0 systemd[1]: Started libpod-conmon-07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c.scope.
Feb 02 11:46:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.66398434 +0000 UTC m=+0.026371687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.767500887 +0000 UTC m=+0.129888204 container init 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.77735206 +0000 UTC m=+0.139739377 container start 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.781722065 +0000 UTC m=+0.144109412 container attach 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:46:03 compute-0 nostalgic_shamir[273233]: 167 167
Feb 02 11:46:03 compute-0 systemd[1]: libpod-07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c.scope: Deactivated successfully.
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.783904658 +0000 UTC m=+0.146292015 container died 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6f31d6de228013461d2f8df81e24e479cc9329b68d97962abab6bf1def5b1aa-merged.mount: Deactivated successfully.
Feb 02 11:46:03 compute-0 podman[273217]: 2026-02-02 11:46:03.830914105 +0000 UTC m=+0.193301422 container remove 07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:46:03 compute-0 systemd[1]: libpod-conmon-07eabb179b04ec23686fca589cc3382854a387be16e394787fc26ff3f6af099c.scope: Deactivated successfully.
Feb 02 11:46:03 compute-0 podman[273257]: 2026-02-02 11:46:03.964577167 +0000 UTC m=+0.044668012 container create 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 11:46:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:04 compute-0 systemd[1]: Started libpod-conmon-6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d.scope.
Feb 02 11:46:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80524971975b355a9691cadc1470cf3183bc546b4d5a27bbd859f96e4e86898c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80524971975b355a9691cadc1470cf3183bc546b4d5a27bbd859f96e4e86898c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80524971975b355a9691cadc1470cf3183bc546b4d5a27bbd859f96e4e86898c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80524971975b355a9691cadc1470cf3183bc546b4d5a27bbd859f96e4e86898c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:03.946772516 +0000 UTC m=+0.026863171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:04.055208615 +0000 UTC m=+0.135299270 container init 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:04.062483813 +0000 UTC m=+0.142574468 container start 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:04.066936551 +0000 UTC m=+0.147027196 container attach 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]: {
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:     "1": [
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:         {
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "devices": [
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "/dev/loop3"
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             ],
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "lv_name": "ceph_lv0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "lv_size": "21470642176",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "name": "ceph_lv0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "tags": {
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.cluster_name": "ceph",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.crush_device_class": "",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.encrypted": "0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.osd_id": "1",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.type": "block",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.vdo": "0",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:                 "ceph.with_tpm": "0"
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             },
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "type": "block",
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:             "vg_name": "ceph_vg0"
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:         }
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]:     ]
Feb 02 11:46:04 compute-0 relaxed_liskov[273274]: }
Feb 02 11:46:04 compute-0 systemd[1]: libpod-6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d.scope: Deactivated successfully.
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:04.388852718 +0000 UTC m=+0.468943363 container died 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-80524971975b355a9691cadc1470cf3183bc546b4d5a27bbd859f96e4e86898c-merged.mount: Deactivated successfully.
Feb 02 11:46:04 compute-0 podman[273257]: 2026-02-02 11:46:04.427862056 +0000 UTC m=+0.507952691 container remove 6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:46:04 compute-0 systemd[1]: libpod-conmon-6c8266de3445ad1822bc5dc1f1911ecca850f9afe2bc92ba57541eba51dc944d.scope: Deactivated successfully.
Feb 02 11:46:04 compute-0 sudo[273149]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:04 compute-0 sudo[273294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:46:04 compute-0 sudo[273294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:04 compute-0 sudo[273294]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:04 compute-0 sudo[273319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:46:04 compute-0 sudo[273319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.002922459 +0000 UTC m=+0.048260444 container create bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:46:05 compute-0 systemd[1]: Started libpod-conmon-bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997.scope.
Feb 02 11:46:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:04.980069964 +0000 UTC m=+0.025407979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.087183735 +0000 UTC m=+0.132521750 container init bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.093089374 +0000 UTC m=+0.138427359 container start bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.097461579 +0000 UTC m=+0.142799584 container attach bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 11:46:05 compute-0 thirsty_galois[273401]: 167 167
Feb 02 11:46:05 compute-0 systemd[1]: libpod-bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997.scope: Deactivated successfully.
Feb 02 11:46:05 compute-0 conmon[273401]: conmon bba3d9cc78108283b7a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997.scope/container/memory.events
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.101840475 +0000 UTC m=+0.147178480 container died bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-246c4465cc047a986c317850ec0b0c2e75aee057684a029e0072ac8966439500-merged.mount: Deactivated successfully.
Feb 02 11:46:05 compute-0 podman[273385]: 2026-02-02 11:46:05.136109107 +0000 UTC m=+0.181447092 container remove bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:46:05 compute-0 systemd[1]: libpod-conmon-bba3d9cc78108283b7a9c978fdd0e19f327e2c1be3e49cdc2880957def00b997.scope: Deactivated successfully.
Feb 02 11:46:05 compute-0 podman[273424]: 2026-02-02 11:46:05.297114182 +0000 UTC m=+0.045321850 container create bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:46:05 compute-0 systemd[1]: Started libpod-conmon-bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1.scope.
Feb 02 11:46:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07e2c6e33132afa501549623867553c64a689bd40ea747abfab7e5f77efd7c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:05 compute-0 podman[273424]: 2026-02-02 11:46:05.276255524 +0000 UTC m=+0.024463222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07e2c6e33132afa501549623867553c64a689bd40ea747abfab7e5f77efd7c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07e2c6e33132afa501549623867553c64a689bd40ea747abfab7e5f77efd7c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07e2c6e33132afa501549623867553c64a689bd40ea747abfab7e5f77efd7c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:46:05 compute-0 podman[273424]: 2026-02-02 11:46:05.386561046 +0000 UTC m=+0.134768744 container init bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:46:05 compute-0 podman[273424]: 2026-02-02 11:46:05.39402855 +0000 UTC m=+0.142236218 container start bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:46:05 compute-0 podman[273424]: 2026-02-02 11:46:05.398240131 +0000 UTC m=+0.146447809 container attach bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:46:05 compute-0 ceph-mon[74676]: pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:05 compute-0 nova_compute[251290]: 2026-02-02 11:46:05.919 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:06 compute-0 lvm[273515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:46:06 compute-0 lvm[273515]: VG ceph_vg0 finished
Feb 02 11:46:06 compute-0 jovial_nightingale[273440]: {}
Feb 02 11:46:06 compute-0 systemd[1]: libpod-bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1.scope: Deactivated successfully.
Feb 02 11:46:06 compute-0 systemd[1]: libpod-bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1.scope: Consumed 1.079s CPU time.
Feb 02 11:46:06 compute-0 podman[273424]: 2026-02-02 11:46:06.13115933 +0000 UTC m=+0.879366998 container died bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07e2c6e33132afa501549623867553c64a689bd40ea747abfab7e5f77efd7c2-merged.mount: Deactivated successfully.
Feb 02 11:46:06 compute-0 podman[273424]: 2026-02-02 11:46:06.182881653 +0000 UTC m=+0.931089321 container remove bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:46:06 compute-0 systemd[1]: libpod-conmon-bf39c4dbbb05d07644e880de2ae01c745e8e339209367756c565b45440d018f1.scope: Deactivated successfully.
Feb 02 11:46:06 compute-0 sudo[273319]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:46:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:46:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:06 compute-0 sudo[273529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:46:06 compute-0 sudo[273529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:06 compute-0 sudo[273529]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:07.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:46:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:07.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:07 compute-0 ceph-mon[74676]: pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:46:07 compute-0 sudo[273555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:46:07 compute-0 sudo[273555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:07 compute-0 sudo[273555]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:07 compute-0 nova_compute[251290]: 2026-02-02 11:46:07.889 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:08.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:09 compute-0 ceph-mon[74676]: pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:09.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:10 compute-0 nova_compute[251290]: 2026-02-02 11:46:10.921 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:11 compute-0 ceph-mon[74676]: pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:46:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:12 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=infra.usagestats t=2026-02-02T11:46:12.012538847Z level=info msg="Usage stats are ready to report"
Feb 02 11:46:12 compute-0 nova_compute[251290]: 2026-02-02 11:46:12.893 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:13 compute-0 ceph-mon[74676]: pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:46:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:13.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:14 compute-0 sshd-session[273585]: Invalid user lighthouse from 80.94.92.186 port 37450
Feb 02 11:46:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:46:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:14 compute-0 sshd-session[273585]: Connection closed by invalid user lighthouse 80.94.92.186 port 37450 [preauth]
Feb 02 11:46:15 compute-0 ceph-mon[74676]: pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:15.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:15 compute-0 nova_compute[251290]: 2026-02-02 11:46:15.923 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:17.197Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:46:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:17.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:46:17 compute-0 ceph-mon[74676]: pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:17.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:17 compute-0 nova_compute[251290]: 2026-02-02 11:46:17.897 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:18.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:19 compute-0 ceph-mon[74676]: pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:19 compute-0 sshd-session[273594]: Accepted publickey for zuul from 192.168.122.10 port 32804 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:46:19 compute-0 systemd-logind[793]: New session 56 of user zuul.
Feb 02 11:46:19 compute-0 systemd[1]: Started Session 56 of User zuul.
Feb 02 11:46:19 compute-0 sshd-session[273594]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:46:19 compute-0 sudo[273599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 02 11:46:19 compute-0 sudo[273599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:46:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:19.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:20 compute-0 nova_compute[251290]: 2026-02-02 11:46:20.925 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:21 compute-0 ceph-mon[74676]: pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:21 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26351 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:21.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16530 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.25996 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26366 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:22 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26002 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:46:22.685 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:46:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:46:22.685 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:46:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:46:22.685 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:46:22 compute-0 nova_compute[251290]: 2026-02-02 11:46:22.936 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:23 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 11:46:23 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577655756' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.26351 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.16530 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.25996 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.26366 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.16539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.26002 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1891110599' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1577655756' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3280997707' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:25 compute-0 ceph-mon[74676]: pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:25 compute-0 nova_compute[251290]: 2026-02-02 11:46:25.927 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:26.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:26 compute-0 podman[273921]: 2026-02-02 11:46:26.279055116 +0000 UTC m=+0.062600746 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 02 11:46:26 compute-0 podman[273922]: 2026-02-02 11:46:26.334360452 +0000 UTC m=+0.117828480 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 02 11:46:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:27.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:46:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:27.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:27 compute-0 ceph-mon[74676]: pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:27 compute-0 sudo[273976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:46:27 compute-0 sudo[273976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:27 compute-0 sudo[273976]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:27.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:27 compute-0 nova_compute[251290]: 2026-02-02 11:46:27.940 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:28 compute-0 ovs-vsctl[274027]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 02 11:46:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:28.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:46:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:28.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:46:29
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.log', '.nfs', '.rgw.root', 'default.rgw.meta', 'volumes', 'vms']
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:46:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:29 compute-0 ceph-mon[74676]: pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:29 compute-0 virtqemud[251949]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb 02 11:46:29 compute-0 virtqemud[251949]: hostname: compute-0
Feb 02 11:46:29 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 02 11:46:29 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 02 11:46:29 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:29 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26017 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: cache status {prefix=cache status} (starting...)
Feb 02 11:46:30 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:30 compute-0 lvm[274344]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:46:30 compute-0 lvm[274344]: VG ceph_vg0 finished
Feb 02 11:46:30 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: client ls {prefix=client ls} (starting...)
Feb 02 11:46:30 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:46:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:46:30 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.26393 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.26017 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2720579560' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3427870786' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26408 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26032 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:30 compute-0 nova_compute[251290]: 2026-02-02 11:46:30.929 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26426 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26044 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16566 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: damage ls {prefix=damage ls} (starting...)
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump loads {prefix=dump loads} (starting...)
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:46:31 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512744456' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26444 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26059 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.26408 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.26032 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/687865387' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2708429672' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.26426 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.26044 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.16566 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2512744456' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2824439092' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1462716929' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:31 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16581 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 02 11:46:31 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:46:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2622959086' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:32 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16599 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Feb 02 11:46:32 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237807645' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26089 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 02 11:46:32 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:32 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26471 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.26444 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: pgmap v1048: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.26059 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.16581 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2622959086' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3806051358' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1632151488' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3614889718' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3733905990' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.16599 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1237807645' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.26089 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/894778968' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:32 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16611 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:32 compute-0 nova_compute[251290]: 2026-02-02 11:46:32.943 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:33 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: ops {prefix=ops} (starting...)
Feb 02 11:46:33 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:33 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26104 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 02 11:46:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51790729' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 02 11:46:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856890390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16632 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:46:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:46:33 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:33 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: session ls {prefix=session ls} (starting...)
Feb 02 11:46:33 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:46:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.26471 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1789212210' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.16611 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.26104 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.26483 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/51790729' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2285635603' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1841752176' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2856890390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1955631862' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.16632 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3116755987' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:33 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: status {prefix=status} (starting...)
Feb 02 11:46:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 11:46:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455019358' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16647 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 11:46:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368667849' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26152 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:46:34.651+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:34 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:34 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26155 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:46:34.655+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:34 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:46:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280959682' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1363343843' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/646336704' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/455019358' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.16647 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1171214176' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2787414927' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3864406166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/470491869' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1368667849' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.26152 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.26155 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2280959682' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:46:34 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1378269078' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 11:46:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320624399' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 02 11:46:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369127918' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16695 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:35 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:46:35.672+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:46:35 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26600 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:46:35 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670204497' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:35.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/116813340' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3320624399' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/423239948' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2010303421' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3369127918' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3783993953' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/85484188' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3670204497' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1265647652' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2556543790' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:46:35 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26209 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:35 compute-0 nova_compute[251290]: 2026-02-02 11:46:35.930 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:36 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 02 11:46:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497315578' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 02 11:46:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713461675' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26233 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26624 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 11:46:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201330270' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 02 11:46:36 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557509200' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:36 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26251 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.16695 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.26600 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.26209 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.26609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1497315578' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3052779910' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2112372452' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/713461675' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.26233 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.26624 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3136514452' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1201330270' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/557509200' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:46:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:36] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:46:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:36] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:37.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16740 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 11:46:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764369264' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26657 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16761 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:37.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26278 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 11:46:37 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848892591' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:37 compute-0 nova_compute[251290]: 2026-02-02 11:46:37.947 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:37 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26684 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.26251 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2669161933' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.26636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1091954384' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.16740 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1764369264' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.26257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3897336221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.26657 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2396450575' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1788909220' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3848892591' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003037 2 0.000048
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000057 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003084 2 0.000042
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=0/0 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: merge_log_dups log.dups.size()=0 olog.dups.size()=24
Feb 02 11:46:38 compute-0 ceph-osd[83123]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001409 2 0.000174
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 113 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:05.725507+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 196608 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x12a3ee/0x1bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 113 handle_osd_map epochs [113,114], i have 114, src has [1,114]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004810 2 0.000093
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009458 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=111/112 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006612 3 0.000170
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.009792 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=111/74 les/c/f=112/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/74 les/c/f=114/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004508 4 0.000394
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/74 les/c/f=114/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/74 les/c/f=114/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.15( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/74 les/c/f=114/75/0 sis=113) [1] r=0 lpr=113 pi=[74,113)/1 crt=43'1029 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.007095 5 0.000460
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000164 1 0.000214
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000581 1 0.000039
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.062805 2 0.000244
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 114 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:06.725697+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 327680 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.932073 1 0.000148
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003095 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary 2.012917 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started 2.012951 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[76,113)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003785133s) [2] async=[2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 43'1029 active pruub 270.924621582s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] exit Reset 0.000175 1 0.000243
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] exit Start 0.000008 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 115 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115 pruub=15.003676414s) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 270.924621582s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:07.725865+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 327680 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886137 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:08.726039+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 278528 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.515622 6 0.000150
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001014 2 0.000120
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 DELETING pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.076992 2 0.000352
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078099 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 116 pg[9.16( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=113/114 n=4 ec=55/37 lis/c=113/76 les/c/f=114/77/0 sis=115) [2] r=-1 lpr=115 pi=[76,115)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.593796 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:09.726245+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 253952 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:10.726602+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 253952 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:11.726843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 237568 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fca4f000/0x0/0x4ffc00000, data 0x13454a/0x1cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.000154495s of 10.169177055s, submitted: 50
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fca4c000/0x0/0x4ffc00000, data 0x136636/0x1cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:12.727026+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 229376 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882757 data_alloc: 218103808 data_used: 172032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:13.727208+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 221184 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:14.727354+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 221184 heap: 72228864 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 117 handle_osd_map epochs [118,119], i have 117, src has [1,119]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:15.727532+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1253376 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=89) [1] r=0 lpr=89 crt=43'1029 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 44.557577 91 0.000523
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=89) [1] r=0 lpr=89 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active 44.565770 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=89) [1] r=0 lpr=89 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary 45.572298 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=89) [1] r=0 lpr=89 crt=43'1029 mlcod 0'0 active mbc={}] exit Started 45.572354 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=89) [1] r=0 lpr=89 crt=43'1029 mlcod 0'0 active mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442985535s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 active pruub 275.912414551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] exit Reset 0.000121 1 0.000222
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] exit Start 0.000008 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 120 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120 pruub=11.442924500s) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 275.912414551s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:16.727676+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.624749 3 0.000054
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.624865 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=120) [0] r=-1 lpr=120 pi=[89,120)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Reset 0.000070 1 0.000240
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004629 2 0.000047
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 121 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1236992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:17.727874+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012225 3 0.000133
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.017020 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=89/90 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=89/89 les/c/f=90/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.013920 5 0.000665
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000139 1 0.000129
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000834 1 0.000071
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031295 2 0.000083
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 122 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1245184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897283 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca3d000/0x0/0x4ffc00000, data 0x1407b7/0x1de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:18.728040+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.997585 1 0.000286
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active 1.044309 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary 2.061354 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started 2.061387 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[89,121)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.969496727s) [0] async=[0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 43'1029 active pruub 282.125457764s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] exit Reset 0.000778 1 0.000876
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] exit Start 0.000109 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 123 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123 pruub=14.968793869s) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 282.125457764s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1212416 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:19.728218+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030573 7 0.000282
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000154 1 0.000100
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1179648 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 DELETING pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033417 2 0.000386
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.033653 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 124 pg[9.1a( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=121/122 n=4 ec=55/37 lis/c=121/89 les/c/f=122/90/0 sis=123) [0] r=-1 lpr=123 pi=[89,123)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.064430 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:20.728431+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1138688 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:21.728679+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1130496 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:22.728859+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1122304 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897480 data_alloc: 218103808 data_used: 172032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.752078056s of 10.860388756s, submitted: 32
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x14678e/0x1e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:23.729000+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1105920 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:24.729151+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 1097728 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:25.729393+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 1081344 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:26.729557+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 1081344 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=94) [1] r=0 lpr=94 crt=43'1029 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 49.086207 104 0.000655
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=94) [1] r=0 lpr=94 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active 49.093832 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=94) [1] r=0 lpr=94 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary 50.189438 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=94) [1] r=0 lpr=94 crt=43'1029 mlcod 0'0 active mbc={}] exit Started 50.189500 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=94) [1] r=0 lpr=94 crt=43'1029 mlcod 0'0 active mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914955139s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 active pruub 290.587219238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] exit Reset 0.000125 1 0.000209
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] exit Start 0.000008 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 129 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129 pruub=14.914890289s) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 290.587219238s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:27.729796+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1073152 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909080 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.683116 3 0.000054
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.683174 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=129) [2] r=-1 lpr=129 pi=[94,129)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Reset 0.000118 1 0.000166
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Start 0.000010 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002170 2 0.000058
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000049 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 130 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:28.729999+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1064960 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fca25000/0x0/0x4ffc00000, data 0x1508ee/0x1f5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 130 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.025074 3 0.000132
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.027393 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=94/95 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=94/94 les/c/f=95/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.011986 5 0.000501
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000261 1 0.000144
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000768 1 0.000149
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045007 2 0.000090
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 131 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:29.730143+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 999424 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.961637 1 0.000152
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active 1.020170 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary 2.047585 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started 2.047625 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[94,130)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991101265s) [2] async=[2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 43'1029 active pruub 293.394470215s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] exit Reset 0.000182 1 0.000367
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] exit Start 0.000011 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 132 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132 pruub=14.991000175s) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 293.394470215s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:30.730269+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 991232 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.079312 7 0.000132
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000088
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 DELETING pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053141 2 0.000632
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053293 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 133 pg[9.1d( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=130/131 n=5 ec=55/37 lis/c=130/94 les/c/f=131/95/0 sis=132) [2] r=-1 lpr=132 pi=[94,132)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.132675 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:31.730412+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 958464 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc handle_mgr_map Got map version 32
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:32.730570+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 770048 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912264 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:33.730698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 770048 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:34.730871+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x156726/0x1fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 761856 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:35.731045+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 761856 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:36.731169+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 761856 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x156726/0x1fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:37.731333+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 753664 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912264 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.456613541s of 15.604823112s, submitted: 67
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=43'1029 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 88.439427 177 0.001453
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active 88.444141 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary 89.451119 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=43'1029 mlcod 0'0 active mbc={}] exit Started 89.451168 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=43'1029 mlcod 0'0 active mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562667847s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 active pruub 302.390136719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] exit Reset 0.000129 1 0.000245
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] exit Start 0.000011 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 134 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134 pruub=15.562610626s) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 302.390136719s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:38.731823+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 753664 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015872 3 0.000063
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.015938 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=134) [0] r=-1 lpr=134 pi=[76,134)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Reset 0.000109 1 0.000162
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002924 2 0.000063
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 135 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:39.732076+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 745472 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fca1c000/0x0/0x4ffc00000, data 0x158812/0x1ff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=99) [1] r=0 lpr=99 crt=43'1029 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.936288 110 0.000829
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=99) [1] r=0 lpr=99 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.947314 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=99) [1] r=0 lpr=99 crt=43'1029 mlcod 0'0 active mbc={}] exit Started/Primary 55.148263 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=99) [1] r=0 lpr=99 crt=43'1029 mlcod 0'0 active mbc={}] exit Started 55.148398 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005581 3 0.000143
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008684 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=99) [1] r=0 lpr=99 crt=43'1029 mlcod 0'0 active mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=76/77 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 136 handle_osd_map epochs [135,136], i have 136, src has [1,136]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064677238s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 active pruub 297.917083740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] exit Reset 0.000155 1 0.000286
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] exit Start 0.000015 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136 pruub=9.064624786s) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 297.917083740s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=76/76 les/c/f=77/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.014181 5 0.000770
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000129 1 0.000104
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000544 1 0.000035
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.036356 2 0.000058
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 136 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.071069 3 0.000108
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.071166 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=136) [0] r=-1 lpr=136 pi=[99,136)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Reset 0.000093 1 0.000169
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.019638 1 0.000148
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active 0.071443 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary 1.080187 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started 1.080226 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[76,135)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942566872s) [0] async=[0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 43'1029 active pruub 304.866455078s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] exit Reset 0.000163 1 0.000237
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] exit Start 0.000018 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137 pruub=15.942464828s) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 304.866455078s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003005 2 0.000055
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000059 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 137 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:40.732284+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 720896 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008372 3 0.000161
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.011569 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=99/99 les/c/f=100/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.012377 5 0.000718
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000159 1 0.000061
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001132 1 0.000033
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025937 7 0.000194
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.040455 2 0.000059
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.040089 1 0.000035
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 DELETING pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044850 2 0.000370
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085023 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 138 pg[9.1e( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=135/136 n=5 ec=55/37 lis/c=135/76 les/c/f=136/77/0 sis=137) [0] r=-1 lpr=137 pi=[76,137)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.111025 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:41.732475+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 589824 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.971185 1 0.000194
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary/Active 1.025918 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started/Primary 2.037512 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] exit Started 2.037565 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[99,137)/1 crt=43'1029 mlcod 43'1029 active+remapped mbc={255={}}] enter Reset
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986603737s) [0] async=[0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 43'1029 active pruub 305.947998047s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] exit Reset 0.000309 1 0.000408
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] enter Started
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] enter Start
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] exit Start 0.000036 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 139 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139 pruub=14.986359596s) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY pruub 305.947998047s@ mbc={}] enter Started/Stray
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:42.732619+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 589824 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919116 data_alloc: 218103808 data_used: 180224
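The prioritycache tune_memory / MempoolThread _resize_shards pairs repeat through the rest of this capture with only slowly drifting numbers, and the raw byte counts are easier to sanity-check in MiB. A throwaway conversion of the tune_memory line above, with field meanings inferred from the names only (the 4294967296 target is presumably the 4 GiB osd_memory_target):

    def mib(n: int) -> float:
        return n / (1024 * 1024)

    target, mapped, unmapped, heap, cache = (
        4294967296, 72687616, 589824, 73277440, 2845415832)
    print(f"target   {mib(target):8.1f} MiB")    # 4096.0
    print(f"mapped   {mib(mapped):8.1f} MiB")    # ~69.3, heap pages in use
    print(f"unmapped {mib(unmapped):8.1f} MiB")  # ~0.6, freed but not yet returned
    print(f"heap     {mib(heap):8.1f} MiB")      # ~69.9 = mapped + unmapped, exactly
    print(f"cache    {mib(cache):8.1f} MiB")     # ~2713.6, autotuned budget

Note that mapped + unmapped equals heap exactly here (72687616 + 589824 = 73277440), and old mem/new mem stay pinned at 2845415832 bytes (~2713.6 MiB) for the whole capture, so the autotuner is holding a steady cache budget under the 4 GiB target.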
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:43.732802+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 589824 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.577650 6 0.000216
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000273 1 0.000081
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 DELETING pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042317 3 0.000340
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.042665 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 pg_epoch: 140 pg[9.1f( v 43'1029 (0'0,43'1029] lb MIN local-lis/les=137/138 n=5 ec=55/37 lis/c=137/99 les/c/f=138/100/0 sis=139) [0] r=-1 lpr=139 pi=[99,139)/1 crt=43'1029 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.620422 0 0.000000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:44.732943+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 565248 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
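Everything inside the heartbeat's store_statfs(...) is hex. A quick decode, under the assumption that the first triple is available/internally-reserved/total bytes and the data pair is stored/allocated, which matches the order Ceph prints these fields in recent releases (verify against your build before relying on it):

    fields = {
        "available":      0x4fca0b000,
        "reserved":       0x0,
        "total":          0x4ffc00000,
        "data_stored":    0x164792,
        "data_allocated": 0x210000,
    }
    for name, val in fields.items():
        print(f"{name:15s} {val:>14,d} bytes ({val / 2**30:.3f} GiB)")

Under that reading this is a ~20 GiB OSD with ~19.95 GiB free and about 1.4 MiB of object data in 2.1 MiB of allocations, consistent with a nearly empty test pool.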
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:45.733085+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 565248 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:46.733216+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 565248 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:47.733396+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 557056 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:48.733545+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 557056 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:49.733682+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 557056 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:50.733834+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 548864 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:51.734025+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 540672 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:52.734248+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 532480 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:53.734364+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 532480 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:54.734531+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 532480 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:55.734779+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 524288 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:56.734938+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 524288 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:57.735161+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 516096 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:58.735315+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 507904 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:15:59.735471+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 499712 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:00.735654+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 499712 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:01.735799+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 491520 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:02.735952+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 491520 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:03.736033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 483328 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:04.736182+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 483328 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:05.736340+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 475136 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba95547800 session 0x55ba9772dc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:06.737361+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 475136 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:07.737592+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 475136 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:08.737856+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 475136 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:09.738103+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 466944 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:10.738370+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 466944 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:11.738602+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 458752 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:12.738831+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 458752 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:13.738959+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 450560 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:14.739188+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 442368 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:15.739447+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 442368 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:16.739724+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 425984 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:17.740062+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 425984 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:18.740210+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 425984 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:19.740431+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 417792 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:20.740879+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 417792 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:21.741014+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 409600 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:22.741271+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 409600 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912075 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:23.741440+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 409600 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:24.741624+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 401408 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:25.741829+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.170650482s of 47.285911560s, submitted: 51
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0b000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 393216 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:26.742128+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 385024 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:27.742279+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 385024 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911484 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:28.742570+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 368640 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:29.742777+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 360448 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:30.743243+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 360448 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:31.743426+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 352256 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:32.743830+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 344064 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:33.744063+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 344064 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:34.744276+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 344064 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:35.744478+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 335872 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:36.744667+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 335872 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:37.744825+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 327680 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:38.744968+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 327680 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:39.745114+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 319488 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:40.745272+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 319488 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:41.745423+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 319488 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:42.745578+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 311296 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:43.745763+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 311296 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:44.745915+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 311296 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:45.746077+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 303104 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:46.746237+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 303104 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:47.746415+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 294912 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:48.746572+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 294912 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:49.746709+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 294912 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:50.746874+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 286720 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:51.747080+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 286720 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:52.747304+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 278528 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:53.747492+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 278528 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:54.747647+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 270336 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:55.747808+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 270336 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:56.747949+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 270336 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:57.748113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 262144 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:58.748323+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 262144 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:16:59.748484+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 262144 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:00.748647+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 245760 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:01.748835+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 245760 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:02.748988+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98800 session 0x55ba973a5680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97136800 session 0x55ba973434a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 237568 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:03.749173+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 237568 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:04.749379+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 237568 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:05.749567+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 221184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:06.749701+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 221184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:07.749911+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 221184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:08.750041+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:09.750188+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137000 session 0x55ba973a41e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137800 session 0x55ba95148780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:10.750406+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:11.750570+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 204800 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:12.750817+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 204800 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910053 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:13.751069+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 204800 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:14.751254+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 196608 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:15.751454+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 196608 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:16.751695+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 188416 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.862396240s of 51.870761871s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:17.751851+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 188416 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911565 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:18.751979+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:19.752115+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:20.752320+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:21.752869+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:22.753065+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911565 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:23.753215+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:24.753371+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 163840 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:25.753558+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 237568 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:26.753784+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 237568 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:27.753957+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 221184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911565 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:28.754123+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 221184 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:29.754275+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:30.754417+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:31.754548+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 212992 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:32.754698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 204800 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911565 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.845238686s of 15.852152824s, submitted: 1
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:33.754868+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 196608 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:34.755047+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 196608 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:35.755259+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 188416 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:36.755425+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 188416 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:37.755567+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:38.755765+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:39.755924+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 180224 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:40.756094+0000)
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.124 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.125 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.125 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.125 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.125 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:41.756288+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:42.756453+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 172032 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:43.756777+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 163840 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:44.756974+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 163840 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:45.757178+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 155648 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:46.757343+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 147456 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:47.757489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 147456 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:48.757647+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 139264 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:49.757798+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 139264 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:50.757953+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 131072 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:51.758121+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 131072 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:52.758337+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 131072 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:53.758581+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 122880 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:54.758838+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 122880 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:55.759023+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 114688 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:56.759273+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 114688 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:57.759455+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 114688 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:58.759624+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 114688 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:17:59.759827+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 106496 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:00.759997+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 106496 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:01.760182+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 106496 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:02.760359+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 98304 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:03.760546+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 98304 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:04.760775+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 98304 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:05.761085+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 90112 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:06.761248+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 90112 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:07.761431+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 81920 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:08.761582+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 81920 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:09.761805+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 65536 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:10.761993+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 57344 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:11.762147+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 49152 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:12.762299+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 40960 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:13.762497+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 40960 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:14.762694+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 40960 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:15.762936+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 32768 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:16.763111+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 32768 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:17.763292+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 32768 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:18.763502+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 24576 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:19.763656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 24576 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:20.763848+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 16384 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:21.764029+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 16384 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:22.764222+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 16384 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:23.764387+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 8192 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:24.764543+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 8192 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:25.764749+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 8192 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:26.764922+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 0 heap: 73277440 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:27.765113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1040384 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:28.765353+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1032192 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:29.765530+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1007616 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:30.765725+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1007616 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:31.765921+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 999424 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:32.766077+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910974 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 999424 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:33.766321+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 999424 heap: 74326016 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:34.766527+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.021350861s of 62.026927948s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 991232 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:35.766716+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 991232 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:36.766907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 983040 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:37.767060+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 983040 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:38.767308+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 983040 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:39.767533+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 974848 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:40.767733+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:41.767970+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 974848 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:42.768198+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 974848 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:43.768359+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 974848 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:44.768511+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 966656 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:45.768718+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 958464 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:46.768893+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 958464 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:47.769048+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 958464 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:48.769230+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 950272 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:49.769434+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 950272 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:50.769592+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 925696 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:51.769767+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 925696 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:52.769993+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 925696 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:53.770175+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 917504 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:54.770349+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 917504 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:55.770566+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 909312 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:56.770773+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 901120 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:57.770998+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 901120 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:58.771160+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 892928 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:18:59.771341+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 892928 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:00.771463+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 892928 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:01.771672+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 884736 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:02.772051+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 884736 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:03.772214+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 884736 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:04.772341+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 876544 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:05.772525+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 876544 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:06.772688+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 868352 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:07.772846+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 868352 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:08.773057+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 868352 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:09.773265+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 860160 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:10.773578+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 860160 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:11.773814+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 851968 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:12.773971+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 851968 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:13.774140+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 851968 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:14.774436+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 843776 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:15.774656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 835584 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:16.774859+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 835584 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:17.775091+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 827392 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:18.775280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 827392 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:19.775450+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 827392 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:20.775626+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 819200 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:21.775841+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 819200 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:22.776032+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 811008 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:23.776214+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 811008 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:24.776327+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 811008 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:25.776511+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 802816 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:26.776706+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 802816 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:27.776873+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 802816 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:28.777113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 794624 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:29.777309+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 794624 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:30.777491+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 794624 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:31.777656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 786432 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:32.777820+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 786432 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:33.777974+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 778240 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:34.778121+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 778240 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:35.778323+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 778240 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:36.778506+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 778240 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:37.778865+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 770048 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:38.779019+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 778240 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:39.779240+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 770048 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:40.779438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 761856 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:41.779642+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 761856 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:42.779911+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 753664 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:43.780120+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 753664 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:44.780321+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 745472 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:45.780559+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 745472 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:46.780793+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 737280 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:47.781040+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 737280 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:48.781251+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912486 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 729088 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:49.781498+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 720896 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:50.781856+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 720896 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:51.782069+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 720896 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:52.782244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 712704 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.918144226s of 77.951164246s, submitted: 1
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:53.782467+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 712704 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:54.782691+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 712704 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:55.782978+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 704512 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:56.783205+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 704512 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:57.783407+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 696320 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:58.783587+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 696320 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:19:59.783784+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 688128 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:00.783989+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 679936 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:01.784181+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 679936 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:02.784478+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 671744 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:03.784657+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 671744 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:04.784864+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 671744 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:05.785107+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 663552 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:06.785329+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 663552 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:07.785507+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 663552 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:08.785715+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 655360 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:09.785945+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 655360 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:10.786147+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 655360 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:11.786369+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 647168 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:12.786627+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 647168 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:13.786866+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 638976 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:14.787106+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 638976 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:15.787323+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 630784 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:16.787520+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 630784 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:17.787693+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 630784 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:18.787922+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 622592 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:19.789073+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 622592 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:20.789305+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 622592 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:21.789567+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 614400 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:22.789807+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 614400 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:23.789976+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 606208 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:24.790143+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 606208 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:25.790359+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 598016 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:26.790558+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 598016 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:27.790730+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 589824 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:28.791021+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 589824 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:29.791216+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 581632 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:30.791444+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 581632 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:31.791670+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 581632 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:32.791974+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 573440 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:33.792186+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 573440 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:34.792480+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 565248 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:35.792708+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 565248 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:36.793080+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 565248 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:37.793237+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 557056 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:38.793463+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 557056 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:39.793632+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 557056 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:40.793843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 540672 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:41.794067+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 540672 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:42.794359+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 540672 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:43.794555+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba95547800 session 0x55ba97e4b2c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 532480 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:44.794813+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 532480 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:45.795159+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 516096 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:46.795371+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 516096 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:47.795600+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 507904 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:48.795886+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 507904 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:49.796104+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 507904 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:50.796307+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 499712 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6346 writes, 1090 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 19.64 MB, 0.03 MB/s
                                           Interval WAL: 6346 writes, 1090 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:51.796515+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 434176 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:52.796728+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 434176 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:53.796971+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 425984 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:54.797158+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 425984 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:55.797465+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 425984 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:56.797843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 417792 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:57.798136+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 417792 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:58.798468+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 417792 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:20:59.798694+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 409600 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:00.798888+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 409600 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:01.799059+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 401408 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:02.799254+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 401408 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:03.799537+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 401408 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:04.799838+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 393216 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:05.800072+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 393216 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:06.800267+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 393216 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:07.800439+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 393216 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:08.800618+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 385024 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:09.800798+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 385024 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:10.801092+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 376832 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:11.801248+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 368640 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:12.801493+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 360448 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:13.801682+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911895 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 360448 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96260400 session 0x55ba97e31680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:14.801926+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 360448 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:15.802160+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 352256 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:16.802457+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 352256 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:17.802719+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 352256 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.081428528s of 85.194946289s, submitted: 1
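
The _kv_sync_thread utilization line is the clearest health signal in this stretch: it reports how long the BlueStore kv sync thread sat idle out of the measured window and how many transactions it submitted. The busy fraction for the line above works out as follows:

# Figures copied from the _kv_sync_thread line above.
idle_s, window_s, submitted = 85.081428528, 85.194946289, 1

busy_ms = (window_s - idle_s) * 1000
print(f"idle {idle_s / window_s:.4%}, busy {busy_ms:.1f} ms "
      f"for {submitted} submitted txn(s)")

That is about 99.87% idle with roughly 114 ms of work for a single transaction; even the busiest occurrence further down (253 submissions in a roughly 13 s window) still leaves the thread about 88% idle.
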
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:18.802952+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913407 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 335872 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:19.803136+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 335872 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:20.803336+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 327680 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:21.803469+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 311296 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:22.803635+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 311296 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:23.803824+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913407 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 303104 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:24.804091+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 303104 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:25.804285+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 303104 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:26.804498+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 294912 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:27.804662+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 294912 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:28.804825+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913407 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 294912 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:29.805007+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 286720 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:30.805146+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.633048058s of 12.637918472s, submitted: 1
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 286720 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:31.805334+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 278528 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:32.805523+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 278528 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:33.805665+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914919 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 278528 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:34.805875+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 270336 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:35.806090+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 270336 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:36.806293+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 262144 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:37.806493+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 262144 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:38.806826+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 262144 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:39.806964+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 253952 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:40.807119+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 253952 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:41.807229+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 253952 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:42.807371+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 245760 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:43.807511+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 245760 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:44.807709+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 237568 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:45.807949+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 237568 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:46.808114+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 237568 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:47.808270+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 229376 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:48.808489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 229376 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:49.808659+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 221184 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:50.808981+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 221184 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:51.809126+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 221184 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:52.809344+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 221184 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:53.809523+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 212992 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:54.809666+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 212992 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:55.809883+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 204800 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:56.810011+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 204800 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:57.810203+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 204800 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:58.810426+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 196608 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:21:59.810575+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 188416 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:00.810822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 188416 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:01.810968+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 188416 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:02.811112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 180224 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:03.811291+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 180224 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:04.811465+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 172032 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:05.811710+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 172032 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:06.811949+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 172032 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:07.812131+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 163840 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:08.812331+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 163840 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:09.812546+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 155648 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:10.812699+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 155648 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:11.812823+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 155648 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:12.813016+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 147456 heap: 75374592 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:13.813175+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.701942444s of 42.708469391s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914400 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1105920 heap: 76423168 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:14.813307+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1024000 heap: 76423168 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:15.813473+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 811008 heap: 77471744 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:16.813630+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 1802240 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:17.813893+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 1802240 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:18.814037+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914328 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 1802240 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:19.814219+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 1802240 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:20.814396+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:21.814606+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:22.814798+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:23.814948+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915840 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:24.815120+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:25.815292+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:26.815460+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.363644600s of 12.950447083s, submitted: 253
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1794048 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:27.815592+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 1785856 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:28.815813+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:29.816023+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:30.816192+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:31.816393+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:32.816605+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:33.816825+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:34.817047+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 1761280 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:35.817280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 1753088 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:36.817675+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 1736704 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:37.817838+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 1736704 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:38.818039+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 1728512 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:39.818257+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 1728512 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:40.818389+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:41.818619+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 1728512 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:42.818843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 1720320 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:43.819222+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 1720320 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:44.819383+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 1703936 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:45.819608+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 1703936 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:46.819802+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 1695744 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:47.819954+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1687552 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:48.820122+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1687552 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:49.820306+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 1679360 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:50.821282+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 1679360 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:51.821460+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 1679360 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:52.821657+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 1671168 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:53.821869+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 1671168 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:54.822095+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:55.822349+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:56.822562+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:57.822720+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:58.822934+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:22:59.823131+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:00.823347+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:01.823498+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:02.823696+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:03.823830+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:04.823954+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:05.824127+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:06.824294+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:07.824489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:08.824674+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98c00 session 0x55ba975265a0
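ms_handle_reset is the one non-periodic event in this stretch: the messenger's connection-reset callback, here dropping session 0x55ba975265a0. For triage it can help to count rare events against the periodic chatter; a throwaway sketch over a journal export (the ceph-osd@1 unit name is an assumption):

```python
import sys
from collections import Counter

# Run against an export such as: journalctl -u ceph-osd@1 > osd.log
counts = Counter()
for raw in open(sys.argv[1]):
    if "ms_handle_reset" in raw:
        counts["reset"] += 1
    elif "monclient: tick" in raw:
        counts["tick"] += 1
print(counts)  # a handful of resets amid thousands of ticks is normal background
```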
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:09.824940+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:10.825152+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:11.825306+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:12.825454+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:13.825659+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:14.825839+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:15.826078+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 1662976 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:16.826237+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:17.826390+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:18.826577+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:19.826787+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915249 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:20.826943+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:21.827145+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:22.827311+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.443744659s of 56.449897766s, submitted: 1
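This utilization line says the kv sync thread was idle for 56.44 of 56.45 seconds with a single submitted transaction, i.e. the OSD is doing almost no write work. The arithmetic, spelled out:

```python
import re

line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
        "idle 56.443744659s of 56.449897766s, submitted: 1")

idle, total = map(float, re.search(r"idle ([\d.]+)s of ([\d.]+)s", line).groups())
print(f"kv sync thread busy {100 * (1 - idle / total):.4f}% "
      f"over a {total:.1f}s window")   # ~0.0109% busy: essentially idle
```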
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:23.827480+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:24.827648+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916761 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:25.827826+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:26.827938+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:27.828068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:28.828219+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:29.828416+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:30.828605+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:31.828823+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:32.828980+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:33.829146+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:34.829313+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:35.829494+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:36.829649+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:37.829837+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:38.829982+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:39.830112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:40.830257+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:41.830389+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:42.830559+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:43.830720+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:44.830898+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:45.831361+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:46.831543+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:47.831735+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1654784 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:48.831971+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:49.832140+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:50.832385+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:51.832568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:52.832805+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:53.832975+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:54.833181+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:55.833411+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:56.833545+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:57.833696+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:58.833851+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:23:59.833987+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:00.834128+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:01.834344+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:02.834553+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:03.835790+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:04.835981+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:05.836198+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:06.836495+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:07.836663+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:08.836822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:09.836973+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:10.837530+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:11.837792+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:12.837957+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1646592 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:13.838105+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:14.838267+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916170 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:15.838454+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:16.838686+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:17.838874+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:18.839020+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.059322357s of 56.078437805s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:19.839179+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917682 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:20.839922+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:21.840140+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:22.840296+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:23.840487+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:24.840822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917682 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:25.841021+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1638400 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:26.841186+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:27.841341+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:28.841708+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:29.841990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917091 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:30.842155+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:31.842329+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:32.842571+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:33.842723+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:34.842863+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917091 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:35.843079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137400 session 0x55ba981665a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1630208 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:36.843272+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:37.843431+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:38.843884+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:39.844038+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917091 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:40.844188+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:41.844330+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:42.844465+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:43.844595+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:44.844717+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917091 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:45.844907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:46.845071+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:47.845239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:48.845394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:49.845567+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.595247269s of 30.605144501s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918603 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:50.845714+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:51.845915+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:52.846055+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:53.846412+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:54.846560+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920115 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:55.846781+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:56.846948+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:57.847133+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:58.847296+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:59.847438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:00.847612+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:01.847868+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:02.848053+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:03.848215+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:04.848385+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:05.848580+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:06.848798+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:07.849013+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:08.849168+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:09.849336+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:10.849470+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:11.849613+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:12.849780+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:13.849939+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:14.850091+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:15.850306+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:16.850452+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:17.850618+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:18.850823+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:19.851125+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:20.851307+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:21.851461+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:22.851650+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:23.852044+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:24.852193+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:25.852380+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:26.852537+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:27.852729+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:28.853122+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:29.853294+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:30.853468+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:31.853609+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:32.853773+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:33.853930+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:34.854099+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:35.854282+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:36.854443+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:37.854709+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:38.854829+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:39.855035+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:40.855159+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:41.855343+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:42.855511+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:43.855668+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:44.855843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:45.856066+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:46.856236+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:47.856405+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:48.856565+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:49.856775+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:50.856935+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:51.857068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:52.857321+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:53.857473+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:54.857617+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:55.857806+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:56.857950+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:57.858123+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:58.858290+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:59.858477+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:00.858612+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:01.858899+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:02.859146+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:03.859312+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:04.859490+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:05.859772+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:06.860017+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:07.860214+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:08.860475+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:09.860703+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:10.860812+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:11.861046+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:12.861270+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:13.861515+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:14.861667+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:15.861834+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:16.861987+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:17.862235+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:18.862419+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:19.862686+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:20.862865+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:21.863085+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:22.863253+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:23.863418+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:24.863621+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:25.863816+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:26.864008+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:27.864157+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:28.864304+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:29.864490+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:30.864648+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:31.865033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:32.865204+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:33.865333+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:34.865489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:35.865770+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:36.865949+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:37.866113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:38.866250+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:39.866397+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:40.866562+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:41.866701+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:42.866854+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:43.867037+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:44.867170+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:45.867394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:46.867629+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:47.867822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:48.868073+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:49.868237+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:50.868420+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:51.868600+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:52.868930+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:53.869100+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:54.869295+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:55.869511+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:57.025521+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:58.025698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:59.025883+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:00.026089+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:01.026303+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:02.026531+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:03.026803+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:04.027026+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:05.027202+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:06.027499+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:07.027642+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:08.027832+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:09.027991+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:10.028137+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:11.028270+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:12.028420+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:13.028601+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:14.028799+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 144.730377197s of 144.760253906s, submitted: 3
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:15.028939+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:16.029175+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:17.029345+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:18.029563+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:19.029729+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:20.029905+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:21.030067+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:22.030442+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:23.030565+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:24.030725+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:25.031142+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:26.031372+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:27.031547+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:28.031801+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:29.031982+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:30.032162+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:31.032429+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 1499136 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:32.032644+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:33.032837+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:34.033112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:35.033280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:36.033503+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:37.033674+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:38.033908+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:39.034059+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:40.034312+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:41.034464+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:42.034619+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:43.034794+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:44.034929+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:45.035075+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:46.035248+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:47.035409+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:48.035583+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba95547400 session 0x55ba96206960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:49.035749+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 1482752 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:50.035886+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 1482752 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:51.036030+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:52.036225+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:53.036491+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:54.036669+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:55.036906+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:56.037184+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:57.037363+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:58.037550+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:59.037753+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:00.037920+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:01.038087+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:02.038247+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:03.038394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:04.038565+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:05.038733+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:06.038954+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:07.039111+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:08.039332+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:09.039470+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:10.039611+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:11.039830+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:12.040003+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:13.040218+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:14.040371+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:15.040508+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:16.040669+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:17.040819+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:18.040980+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:19.041099+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:20.041348+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:21.041473+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:22.041621+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:23.041709+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:24.041863+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:25.042112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:26.042419+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:27.042623+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:28.042864+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:29.043073+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:30.043254+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:31.043492+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:32.043723+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:33.043991+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:34.044169+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:35.044452+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:36.044692+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:37.044907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:38.045088+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:39.045258+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:40.045405+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:41.045617+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:42.045791+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:43.045942+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:44.046096+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:45.046310+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:46.046526+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:47.046680+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:48.047408+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:49.047609+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:50.047793+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:51.047994+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:52.048236+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:53.048420+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:54.048582+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:55.048840+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:56.049022+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:57.049244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:58.049396+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:59.049579+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:00.049759+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc ms_handle_reset ms_handle_reset con 0x55ba95546800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3082357126
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: get_auth_request con 0x55ba96fcc000 auth_method 0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc handle_mgr_configure stats_period=5
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:01.049990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96282400 session 0x55ba97769680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:02.050231+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:03.050465+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:04.050707+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:05.051113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:06.051449+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:07.051692+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:08.051919+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:09.052086+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:10.052331+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:11.052568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:12.052843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:13.053068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:14.053299+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:15.053540+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:16.053816+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:17.054047+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:18.054300+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:19.054554+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:20.054847+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:21.055110+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:22.055292+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:23.055539+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:24.055799+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:25.056012+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:26.056256+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:27.056422+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:28.056701+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:29.056897+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:30.057072+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:31.057299+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:32.057495+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:33.057702+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:34.057974+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:35.058147+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:36.058330+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:37.058522+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:38.058785+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:39.059008+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:40.059234+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:41.059455+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:42.059648+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:43.059900+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:44.060079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:45.060239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:46.060481+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:47.060655+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:48.060863+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:49.061086+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:50.061294+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97136000 session 0x55ba97e4a5a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:51.061471+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:52.061688+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:53.061897+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:54.062121+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:55.062835+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:56.063064+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:57.063277+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:58.063469+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:59.063696+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:00.063855+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:01.064011+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:02.064207+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:03.064444+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:04.064656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:05.064827+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:06.065101+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:07.065340+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 172.568267822s of 172.578765869s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:08.065578+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:09.065828+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:10.066013+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:11.066379+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919854 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:12.066628+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:13.066853+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:14.067107+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:15.067363+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:16.067618+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:17.067840+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:18.068094+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:19.068297+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:20.068450+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98c00 session 0x55ba98009680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:21.068603+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:22.068831+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:23.069021+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:24.069253+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:25.069489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:26.069816+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:27.070084+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:28.070259+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:29.070514+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:30.070701+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:31.070906+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:32.071156+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:33.071400+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:34.071621+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.114982605s of 27.133071899s, submitted: 3
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:35.071830+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:36.072074+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924390 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:37.074243+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:38.074836+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:39.076517+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:40.078712+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:41.079698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:42.080563+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:43.082003+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:44.084276+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:45.085360+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137400 session 0x55ba96206780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:46.086413+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:47.086725+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:48.087437+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:49.088208+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:50.089211+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:51.089839+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6868 writes, 27K keys, 6868 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6868 writes, 1340 syncs, 5.13 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 522 writes, 837 keys, 522 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 522 writes, 250 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:52.090184+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:53.090817+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:54.091649+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:55.092216+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:56.092722+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:57.092967+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:58.093324+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.184877396s of 24.231876373s, submitted: 3
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:59.093634+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:00.093977+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:01.094280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924720 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:02.094633+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:03.094842+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:04.095043+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:05.095252+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:06.095467+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924720 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:07.095625+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:08.095854+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:09.096012+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:10.096162+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:11.096425+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:12.096651+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:13.096804+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:14.096981+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:15.097217+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:16.097455+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:17.097650+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:18.097887+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:19.098157+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:20.098331+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137800 session 0x55ba97e4b0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:21.098479+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:22.098663+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:23.098868+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:24.099199+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:25.099394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:26.099656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97136400 session 0x55ba98026960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:27.099887+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:28.100166+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:29.100384+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:30.100533+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:31.100807+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:32.101015+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:33.101218+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:34.101399+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:35.101641+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:36.101917+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:37.102086+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.840103149s of 38.850219727s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:38.102251+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:39.102492+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:40.102802+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:41.104871+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:42.105861+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925641 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:43.107171+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:44.107621+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:45.108014+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:46.108234+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:47.108649+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925050 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:48.109016+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:49.109338+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.065451622s of 12.071977615s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:50.109628+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:51.109838+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:52.110138+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:53.110406+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:54.110631+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:55.110868+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread fragmentation_score=0.000024 took=0.000113s
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:56.111157+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:57.111423+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:58.111794+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:59.112016+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:00.112210+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:01.112334+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:02.112620+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:03.112737+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:04.113065+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:05.113279+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:06.113572+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:07.113775+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:08.113969+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:09.114142+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.164722443s of 20.168237686s, submitted: 1
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:10.114291+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:11.114495+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:12.117135+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:13.117360+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:14.117575+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 204800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:15.117803+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 73728 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:16.118081+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 2023424 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:17.118335+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:18.118625+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:19.118890+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:20.119159+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:21.119398+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:22.119685+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:23.120009+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:24.120382+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:25.120795+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:26.121020+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:27.121380+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:28.121547+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:29.121866+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:30.122112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:31.122313+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:32.122518+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:33.122723+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:34.123014+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:35.123350+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:36.123611+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1990656 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:37.123830+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1990656 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:38.124110+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:39.124259+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:40.124698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:41.125062+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:42.125265+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98c00 session 0x55ba97d430e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:43.125579+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:44.126186+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:45.126717+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:46.127068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:47.127280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:48.127563+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:49.127848+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:50.128091+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:51.128364+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:52.128630+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:53.128845+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:54.129054+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:55.129249+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:56.136028+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.293338776s of 46.293270111s, submitted: 250
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:57.136244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927483 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:58.136443+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:59.136642+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:00.136800+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:01.137244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:02.137413+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928995 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:03.137653+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:04.137840+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:05.138075+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:06.138316+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:07.138577+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:08.138829+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:09.138993+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:10.139220+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:11.139474+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:12.139688+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:13.139870+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:14.140066+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:15.140901+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:16.141087+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:17.141284+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:18.141457+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:19.141657+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:20.141822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:21.141969+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:22.142119+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:23.142342+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [5])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:24.142539+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:25.142704+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:26.143003+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:27.143295+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:28.143557+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.104373932s of 32.233356476s, submitted: 3
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:29.143850+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 1867776 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:30.144069+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 143 ms_handle_reset con 0x55ba97137800 session 0x55ba97e30000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 1835008 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:31.144239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9ff000/0x0/0x4ffc00000, data 0x16aae9/0x21a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 18546688 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:32.144397+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc201000/0x0/0x4ffc00000, data 0x96aaf9/0xa1b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 144 ms_handle_reset con 0x55ba97136800 session 0x55ba974eb860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001895 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:33.144568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:34.144705+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:35.144797+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0x96cc01/0xa1e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:36.144966+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:37.145123+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:38.145356+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:39.145618+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:40.145860+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba97137000 session 0x55ba973a54a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:41.146096+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:42.146347+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:43.146594+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:44.146761+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:45.146917+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:46.147124+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:47.147379+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:48.147642+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:49.147867+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:50.149129+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:51.149869+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:52.150507+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:53.151700+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:54.152362+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.817378998s of 26.077272415s, submitted: 73
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:55.152636+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:56.153489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:57.153950+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006165 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:58.154383+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:59.155029+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:00.155194+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:01.155415+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:02.155674+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:03.155940+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:04.156416+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:05.156932+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:06.157155+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:07.157430+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:08.157716+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:09.158046+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:10.158195+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:11.158666+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:12.159107+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:13.159369+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:14.159627+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:15.159773+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:16.160041+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:17.160226+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:18.160434+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.596820831s of 24.609909058s, submitted: 3
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba96283000 session 0x55ba972061e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:19.160668+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 17440768 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba96283000 session 0x55ba978ae3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:20.160836+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 17424384 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:21.161033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 17416192 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96f98c00 session 0x55ba971c50e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97136800 session 0x55ba986d4780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97137000 session 0x55ba986d4960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97137800 session 0x55ba986d4b40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96283000 session 0x55ba986d4d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:22.161221+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064523 data_alloc: 218103808 data_used: 184320
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:23.161394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:24.161809+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:25.161948+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96f98c00 session 0x55ba986d4f00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:26.162118+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 15384576 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:27.163035+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 15384576 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083779 data_alloc: 218103808 data_used: 2904064
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:28.163401+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 11911168 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:29.164316+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9338880 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:30.164478+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9338880 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb2e000/0x0/0x4ffc00000, data 0x1038e12/0x10ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:31.164666+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 9297920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.401331902s of 12.604059219s, submitted: 31
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:32.165368+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 9297920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbb2a000/0x0/0x4ffc00000, data 0x103ade4/0x10f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114905 data_alloc: 218103808 data_used: 6942720
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:33.165930+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:34.166344+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:35.166530+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:36.166995+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:37.167460+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184561 data_alloc: 218103808 data_used: 6963200
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:38.167636+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbb2a000/0x0/0x4ffc00000, data 0x103ade4/0x10f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 6922240 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:39.168014+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91561984 unmapped: 6897664 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:40.168358+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 6176768 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:41.168660+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:42.169198+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202095 data_alloc: 218103808 data_used: 7163904
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:43.169409+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb0aa000/0x0/0x4ffc00000, data 0x1abbde4/0x1b72000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:44.169660+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:45.169887+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.370252609s of 13.598746300s, submitted: 78
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:46.170302+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:47.170562+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198767 data_alloc: 218103808 data_used: 7163904
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:48.170798+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:49.171589+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:50.171778+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:51.172022+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:52.172183+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199039 data_alloc: 218103808 data_used: 7163904
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:53.172439+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:54.172682+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:55.172865+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9802e1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e5f0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261c00 session 0x55ba9772c000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91799552 unmapped: 6660096 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.791528702s of 10.809786797s, submitted: 4
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba98100000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:56.173116+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb080000/0x0/0x4ffc00000, data 0x1ae5de4/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba98008780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 6971392 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba97e31a40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:57.173378+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98c00 session 0x55ba97342000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139800 session 0x55ba97527e00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4af00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 9969664 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776c000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba94968780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:58.173534+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243195 data_alloc: 218103808 data_used: 7168000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:59.173702+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98c00 session 0x55ba978acb40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:00.173831+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x1fb2de4/0x2069000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba97e5fc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:01.174176+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x1fb2de4/0x2069000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba97343860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba973a41e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:02.174360+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 9641984 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:03.174608+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248541 data_alloc: 218103808 data_used: 7299072
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 4980736 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:04.174850+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:05.175079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:06.175288+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:07.175505+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fab8e000/0x0/0x4ffc00000, data 0x1fd6df4/0x208e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:08.175681+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280005 data_alloc: 234881024 data_used: 12001280
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:09.175882+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.933655739s of 14.016370773s, submitted: 18
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:10.176034+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba9772de00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 4808704 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:11.176211+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96862208 unmapped: 4751360 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:12.176372+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96862208 unmapped: 4751360 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:13.176503+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280261 data_alloc: 234881024 data_used: 12001280
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1fd9df4/0x2091000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 4702208 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:14.176676+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 8257536 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:15.176846+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 7823360 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:16.177019+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x2974df4/0x2a2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 7553024 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:17.177126+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 7544832 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:18.177315+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370149 data_alloc: 234881024 data_used: 12378112
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:19.177468+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29fbdf4/0x2ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:20.177612+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29fbdf4/0x2ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:21.177822+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.416647911s of 11.216951370s, submitted: 81
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8b9a000/0x0/0x4ffc00000, data 0x2a1adf4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:22.177961+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8b97000/0x0/0x4ffc00000, data 0x2a1ddf4/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:23.178180+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366021 data_alloc: 234881024 data_used: 12378112
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:24.178324+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:25.178466+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 8396800 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba94bd2b40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba9732ef00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:26.178669+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 11558912 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97ea8f00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:27.178823+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:28.178991+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214118 data_alloc: 218103808 data_used: 7176192
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac7000/0x0/0x4ffc00000, data 0x1aeede4/0x1ba5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac2000/0x0/0x4ffc00000, data 0x1af3de4/0x1baa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:29.179147+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:30.179300+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:31.179468+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:32.179600+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136800 session 0x55ba986d50e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137000 session 0x55ba98008d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.436764717s of 11.524084091s, submitted: 27
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac2000/0x0/0x4ffc00000, data 0x1af3de4/0x1baa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:33.179812+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba9776ad20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:34.180079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:35.180307+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:36.180589+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:37.180853+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:38.181116+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:39.181379+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:40.181551+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:41.181846+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:42.182101+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:43.182286+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:44.182478+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:45.182632+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:46.182887+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:47.183097+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:48.183325+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:49.183638+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16779 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:50.183907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:51.184115+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:52.184391+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:53.184582+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:54.184844+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:55.185034+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:56.185235+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:57.185395+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.409597397s of 24.497409821s, submitted: 20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba971c41e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba96ff94a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba978ae3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136800 session 0x55ba94bd21e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137000 session 0x55ba973a54a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:58.185614+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100641 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:59.185800+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:00.186652+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:01.186878+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:02.187402+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:03.188055+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100641 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:04.188539+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba94af10e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba978aed20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:05.188752+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 24117248 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:06.188941+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:07.189102+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:08.189287+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148805 data_alloc: 218103808 data_used: 7311360
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:09.191552+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:10.191859+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:11.192190+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:12.192389+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:13.192577+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148805 data_alloc: 218103808 data_used: 7311360
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:14.192839+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:15.193160+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:16.193382+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.047296524s of 19.149776459s, submitted: 23
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:17.193626+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 18890752 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:18.193873+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228275 data_alloc: 218103808 data_used: 7340032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ae0000/0x0/0x4ffc00000, data 0x1ad4de4/0x1b8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:19.194196+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:20.194375+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:21.194826+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ae0000/0x0/0x4ffc00000, data 0x1ad4de4/0x1b8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:22.194988+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:23.195209+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229787 data_alloc: 218103808 data_used: 7340032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:24.195451+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:25.195658+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:26.195884+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:27.196077+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9adf000/0x0/0x4ffc00000, data 0x1ad6de4/0x1b8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba95f174a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:28.196315+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227747 data_alloc: 218103808 data_used: 7340032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:29.196516+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.821294785s of 13.115832329s, submitted: 75
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:30.196698+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:31.196896+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:32.197085+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:33.197302+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ade000/0x0/0x4ffc00000, data 0x1ad7de4/0x1b8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227971 data_alloc: 218103808 data_used: 7340032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:34.197652+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:35.197924+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ade000/0x0/0x4ffc00000, data 0x1ad7de4/0x1b8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:36.198399+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:37.198589+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:38.198759+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227651 data_alloc: 218103808 data_used: 7340032
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:39.198953+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.165346146s of 10.174480438s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba962072c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba973430e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 17522688 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97810c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:40.199132+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97810c00 session 0x55ba97206d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:41.199283+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:42.199459+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:43.199677+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:44.199952+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:45.200237+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:46.200581+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:47.200764+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:48.201030+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:49.201205+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:50.201435+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:51.201591+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:52.201865+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:53.202082+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:54.202239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:55.202392+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:56.202646+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:57.202829+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:58.203033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:59.203193+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:00.203415+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:01.203609+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:02.203820+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:03.203968+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:04.204211+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:05.204380+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:06.204594+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:07.204814+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:08.205211+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:09.205398+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:10.205924+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.322380066s of 30.489206314s, submitted: 35
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba971c5a40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba971c4b40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 21839872 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:11.206107+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba94af0780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba96ff81e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97810800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97810800 session 0x55ba96ff8d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97a2b0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97a2a1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:12.206395+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:13.206540+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8a0000/0x0/0x4ffc00000, data 0xd14e46/0xdcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083328 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8a0000/0x0/0x4ffc00000, data 0xd14e46/0xdcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:14.206836+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:15.207008+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:16.207217+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba986d4000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:17.207374+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 21602304 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:18.207555+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 21585920 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100276 data_alloc: 218103808 data_used: 2387968
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:19.207816+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:20.208021+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:21.208221+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:22.208405+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:23.208569+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111068 data_alloc: 218103808 data_used: 4001792
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:24.208773+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:25.209125+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:26.209438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:27.209617+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:28.209900+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837131500s of 18.049880981s, submitted: 30
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143858 data_alloc: 218103808 data_used: 4759552
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:29.210090+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 18890752 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:30.210292+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 18022400 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:31.210544+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 18022400 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:32.210813+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:33.210959+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162424 data_alloc: 218103808 data_used: 4984832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:34.211285+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:35.211526+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:36.211843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:37.212066+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 17956864 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:38.212290+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:39.212524+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:40.212795+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:41.212971+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:42.213145+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:43.213331+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:44.213515+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:45.213789+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:46.213987+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:47.214258+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:48.214498+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:49.214679+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba97baed20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97500960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97500b40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97bae960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 17932288 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.843761444s of 21.162544250s, submitted: 64
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97148780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba97bae5a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba97526f00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9776c780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978adc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:50.214928+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:51.215112+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:52.215278+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:53.215472+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d4000/0x0/0x4ffc00000, data 0x13e0e46/0x1498000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188922 data_alloc: 218103808 data_used: 5058560
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:54.215725+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:55.215963+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:56.216246+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:57.216467+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d4000/0x0/0x4ffc00000, data 0x13e0e46/0x1498000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba978ac000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:58.216627+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189887 data_alloc: 218103808 data_used: 5058560
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:59.552318+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.809700966s of 10.887388229s, submitted: 14
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 18735104 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:00.552639+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:01.552845+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:02.553085+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:03.553241+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204631 data_alloc: 218103808 data_used: 7233536
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:04.553382+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:05.553550+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 17522688 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:06.553898+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:07.554021+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:08.554153+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204631 data_alloc: 218103808 data_used: 7233536
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:09.554268+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.951482773s of 10.088466644s, submitted: 23
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 14483456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:10.554391+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101834752 unmapped: 14180352 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:11.554533+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:12.554711+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:13.554889+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247231 data_alloc: 218103808 data_used: 7974912
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:14.555044+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ed1000/0x0/0x4ffc00000, data 0x16e2e69/0x179b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:15.555192+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:16.555391+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:17.555532+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:18.555733+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247247 data_alloc: 218103808 data_used: 7974912
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:19.555919+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba9776d0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974cb680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ed1000/0x0/0x4ffc00000, data 0x16e2e69/0x179b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:20.556065+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881438255s of 10.082806587s, submitted: 18
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba978af860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:21.556191+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:22.556627+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa511000/0x0/0x4ffc00000, data 0x10a2e46/0x115a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:23.556797+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167145 data_alloc: 218103808 data_used: 5058560
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa511000/0x0/0x4ffc00000, data 0x10a2e46/0x115a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:24.557040+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba975010e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba94bd4780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:25.557221+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98017280 unmapped: 17997824 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba978abc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:26.557461+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:27.557780+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:28.557940+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:29.558145+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:30.558301+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:31.558480+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:32.558641+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:33.558864+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:34.559124+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:35.559289+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:36.559494+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:37.559634+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:38.559840+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:39.560037+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:40.560167+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:41.560319+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:42.560479+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:43.560696+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.462673187s of 23.645784378s, submitted: 51
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145739 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776a960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba972d6d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba971c54a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba95149860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:44.560807+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974cb4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d9000/0x0/0x4ffc00000, data 0x13dcde4/0x1493000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:45.560982+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:46.561448+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:47.561589+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:48.561757+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97e4be00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97d7a1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145739 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:49.561898+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba971494a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba95e40d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:50.562089+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 24715264 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:51.562315+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 24715264 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:52.562467+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:53.562635+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221661 data_alloc: 218103808 data_used: 7557120
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:54.562820+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:55.562969+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:56.563207+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:57.563366+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:58.563503+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221661 data_alloc: 218103808 data_used: 7557120
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:59.563646+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:00.563794+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:01.563936+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 22749184 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.684358597s of 17.943258286s, submitted: 19
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:02.564106+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 17784832 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:03.564249+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0e000/0x0/0x4ffc00000, data 0x1a9ee07/0x1b56000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275807 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:04.564424+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:05.564966+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:06.565228+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:07.565386+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:08.565539+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:09.565700+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:10.565842+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:11.565985+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:12.566137+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:13.566300+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:14.566535+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:15.566882+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:16.567335+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:17.567537+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:18.567714+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:19.567948+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:20.568138+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:21.568371+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:22.568565+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:23.568707+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:24.568932+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101367808 unmapped: 18325504 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:25.569128+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:26.569322+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:27.569504+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:28.569675+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:29.569857+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:30.570047+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:31.570232+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba9802f0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba95e40960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba95e405a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba972d65a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.683389664s of 29.841543198s, submitted: 44
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97769c20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba977685a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336800 session 0x55ba981b3c20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba97206f00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba977692c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:32.570385+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 18104320 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9677000/0x0/0x4ffc00000, data 0x1f3de07/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:33.570564+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 18104320 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311921 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:34.570710+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101597184 unmapped: 18096128 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:35.570902+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101597184 unmapped: 18096128 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba972072c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:36.571059+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 18087936 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:37.571262+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 18063360 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:38.571427+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336849 data_alloc: 234881024 data_used: 11227136
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:39.571553+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:40.571731+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:41.571939+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:42.572068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:43.572223+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:44.572400+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336849 data_alloc: 234881024 data_used: 11227136
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:45.572553+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:46.572786+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:47.572992+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.598789215s of 15.770350456s, submitted: 11
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 14286848 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:48.573119+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 14286848 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:49.573564+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356043 data_alloc: 234881024 data_used: 11460608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 14139392 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:50.573729+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:51.573946+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:52.574148+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:53.574304+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:54.574650+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359763 data_alloc: 234881024 data_used: 11460608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:55.574843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:56.575033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:57.575185+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:58.575359+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:59.575507+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359763 data_alloc: 234881024 data_used: 11460608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:00.575655+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97a2a960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97d7a960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.472229958s of 13.571630478s, submitted: 32
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:01.575837+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba974ebc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:02.575990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:03.576133+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:04.576293+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279871 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:05.576430+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0c000/0x0/0x4ffc00000, data 0x1aa8e07/0x1b60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:06.576604+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:07.576723+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:08.576926+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:09.577079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279871 data_alloc: 218103808 data_used: 7553024
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4ad20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba972d70e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:10.577219+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978acf00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:11.577340+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:12.577537+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:13.577845+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:14.578027+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:15.578176+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:16.578373+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:17.578639+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:18.579387+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:19.579722+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:20.580179+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:21.580480+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:22.580668+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:23.581368+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:24.581802+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:25.582543+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:26.583280+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:27.584245+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:28.584807+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:29.585523+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:30.585704+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:31.586294+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:32.586805+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:33.587214+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:34.587624+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:35.587990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.002712250s of 35.214824677s, submitted: 54
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:36.588276+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97207680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba977690e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978ac960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97500780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba971c54a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10d1e0d/0x1189000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:37.588721+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:38.589224+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:39.589431+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147747 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:40.589784+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 23429120 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9776da40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:41.590091+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 23429120 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:42.590439+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4be000/0x0/0x4ffc00000, data 0x10f5e69/0x11ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:43.590712+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:44.590984+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201852 data_alloc: 218103808 data_used: 7708672
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba974caf00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba9776be00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:45.591107+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9776a5a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:46.591434+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:47.591608+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:48.591803+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:49.592027+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:50.592228+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:51.592443+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8880 writes, 34K keys, 8880 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8880 writes, 2196 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2012 writes, 6756 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 6.16 MB, 0.01 MB/s
                                           Interval WAL: 2012 writes, 856 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:52.592587+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:53.593037+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:54.593350+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:55.593595+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:56.593901+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:57.594092+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:58.594250+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:59.594435+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:00.594576+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:01.594719+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97206780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba94af0780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974ea000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba986d5e00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.948949814s of 25.311471939s, submitted: 68
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9496cf00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978aed20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978af0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978afa40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba951485a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:02.594943+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:03.595125+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:04.595288+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140515 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:05.595438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa77d000/0x0/0x4ffc00000, data 0xe37e46/0xeef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4a780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:06.595617+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4bc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974ebe00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba974cad20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 23830528 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:07.595763+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:08.595900+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:09.596072+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170554 data_alloc: 218103808 data_used: 3780608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa758000/0x0/0x4ffc00000, data 0xe5be56/0xf14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776b2c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:10.596298+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9802f4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8f7000/0x0/0x4ffc00000, data 0x998df4/0xa50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:11.596470+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:12.596620+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:13.596777+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:14.596940+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:15.597113+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:16.597283+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:17.597418+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:18.597584+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:19.597844+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:20.598054+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:21.598224+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:22.598439+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:23.598739+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:24.599409+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:25.599775+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:26.600287+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:27.600867+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:28.601403+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:29.601942+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:30.602246+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:31.602508+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:32.602710+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:33.602873+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:34.603268+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:35.603541+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:36.603938+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:37.604093+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97a2a1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:38.604244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba9732e960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9732fa40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9802fc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.645248413s of 36.975486755s, submitted: 84
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9732f680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba9496cd20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97baf680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978aa000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb2c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:39.604568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188891 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:40.604872+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:41.605189+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa02a000/0x0/0x4ffc00000, data 0x158adf4/0x1642000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:42.605450+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba974ea1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa02a000/0x0/0x4ffc00000, data 0x158adf4/0x1642000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978ae1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:43.605680+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba98009860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba955ef0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98869248 unmapped: 33423360 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:44.605907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194850 data_alloc: 218103808 data_used: 200704
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 33415168 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:45.606028+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:46.606272+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 33415168 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:47.606438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:48.606624+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:49.606808+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280274 data_alloc: 234881024 data_used: 12877824
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:50.607014+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:51.607162+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 26804224 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:52.607303+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:53.607472+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:54.607677+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280274 data_alloc: 234881024 data_used: 12877824
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:55.607860+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:56.608108+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.110782623s of 18.214258194s, submitted: 17
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:57.608275+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 18964480 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:58.608487+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:59.608791+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:00.608959+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:01.609187+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:02.609328+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 19251200 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:03.609643+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 19251200 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:04.609817+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:05.609972+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 19202048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:06.610155+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:07.610287+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:08.610409+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:09.610549+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:10.610716+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e314a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.625873566s of 13.859022141s, submitted: 76
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97baf0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:11.610854+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1000 session 0x55ba978af860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:12.611005+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 26509312 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:13.611131+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 26509312 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:14.611271+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 26484736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:15.611429+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 27901952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:16.611628+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 27656192 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:17.611803+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 27484160 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:18.611964+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:19.612128+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:20.612297+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:21.612439+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:22.612601+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:23.612793+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:24.612945+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:25.613098+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:26.613281+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:27.613437+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:28.613602+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:29.614006+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:30.614176+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.093173981s of 20.459842682s, submitted: 278
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba98101a40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4a000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4a960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:31.614327+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97e4ab40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba955ee3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:32.614867+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:33.615356+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:34.615796+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173818 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:35.616037+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:36.616287+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:37.616924+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:38.617188+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:39.617394+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba971c45a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173818 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba955ee1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:40.617632+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974eb4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927599907s of 10.040019989s, submitted: 21
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978ad4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:41.617833+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104857600 unmapped: 27435008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:42.617990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 27328512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:43.618178+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:44.619006+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa516000/0x0/0x4ffc00000, data 0x109edf2/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225820 data_alloc: 218103808 data_used: 7274496
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:45.619290+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:46.619568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:47.619766+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa516000/0x0/0x4ffc00000, data 0x109edf2/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:48.619913+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba95547800 session 0x55ba97ea90e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:49.620140+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225820 data_alloc: 218103808 data_used: 7274496
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:50.620328+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:51.620550+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:52.620785+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.263588905s of 11.275200844s, submitted: 2
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 20799488 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9d37000/0x0/0x4ffc00000, data 0x186fdf2/0x1927000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:53.620939+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114728960 unmapped: 17563648 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c9a000/0x0/0x4ffc00000, data 0x191adf2/0x19d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97149c20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97bae3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba94af0b40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba978afa40
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:54.621108+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98800 session 0x55ba95e410e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97bae000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776b4a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba97500960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba97baed20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329233 data_alloc: 218103808 data_used: 7700480
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:55.621307+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:56.621560+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:57.621733+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:58.621932+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a7f000/0x0/0x4ffc00000, data 0x1b33e64/0x1bed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:59.622070+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 20930560 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a5e000/0x0/0x4ffc00000, data 0x1b54e64/0x1c0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328993 data_alloc: 218103808 data_used: 7700480
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:00.622275+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 20922368 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:01.622434+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 20922368 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:02.622607+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138c00 session 0x55ba97baf860
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111378432 unmapped: 20914176 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:03.622795+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111394816 unmapped: 20897792 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:04.623088+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111394816 unmapped: 20897792 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115145683s of 12.528117180s, submitted: 147
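
The _kv_sync_thread line above reports how long BlueStore's key-value sync thread sat idle over the sampling window; turning that into a utilization figure is one line of arithmetic (sketch, field names mine):

```python
idle, window, submitted = 12.115145683, 12.528117180, 147
busy = window - idle
print(f"kv sync thread busy {busy:.3f}s of {window:.3f}s "
      f"({busy / window:.1%}), {submitted} transactions submitted")
```

So the thread was busy for only about 3.3% of the window: this OSD is nearly idle, which matches the flat heartbeat statistics elsewhere in the burst.
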
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333249 data_alloc: 218103808 data_used: 8241152
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:05.623244+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a5e000/0x0/0x4ffc00000, data 0x1b54e64/0x1c0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:06.623522+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:07.623659+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:08.623907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111558656 unmapped: 20733952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:09.624074+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111558656 unmapped: 20733952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336441 data_alloc: 218103808 data_used: 8765440
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:10.624303+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:11.624490+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:12.624677+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:13.624927+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:14.625079+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.119070053s of 10.132285118s, submitted: 5
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396157 data_alloc: 218103808 data_used: 8814592
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:15.625217+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 18251776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9219000/0x0/0x4ffc00000, data 0x2399e64/0x2453000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:16.625400+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 18210816 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:17.625576+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:18.625785+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:19.625944+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407199 data_alloc: 218103808 data_used: 9080832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:20.626118+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:21.626289+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:22.626438+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:23.626601+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:24.626824+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 18194432 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405991 data_alloc: 218103808 data_used: 9084928
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:25.626944+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 18186240 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:26.627226+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:27.627387+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:28.627559+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:29.627706+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:30.627908+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405991 data_alloc: 218103808 data_used: 9084928
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:31.628058+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.411430359s of 16.657747269s, submitted: 73
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:32.628257+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9207000/0x0/0x4ffc00000, data 0x23abe64/0x2465000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:33.628410+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97500d20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 18169856 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:34.628552+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba9496c3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:35.628765+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314411 data_alloc: 218103808 data_used: 7700480
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:36.628930+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:37.629098+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c62000/0x0/0x4ffc00000, data 0x1952df2/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:38.629235+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:39.629379+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:40.629565+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314511 data_alloc: 218103808 data_used: 7700480
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:41.629709+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 18595840 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.349160194s of 10.561897278s, submitted: 33
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba95f170e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1800 session 0x55ba974ebe00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:42.630500+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c62000/0x0/0x4ffc00000, data 0x1952df2/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974eb0e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:43.630645+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:44.630810+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:45.630953+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:46.631191+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:47.631338+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:48.631479+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:49.631723+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:50.631872+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:51.631990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:52.632165+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:53.632338+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:54.632484+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:55.632634+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:56.632876+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:57.633033+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:58.633194+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:59.633330+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc ms_handle_reset ms_handle_reset con 0x55ba96fcc000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3082357126
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: get_auth_request con 0x55ba96f98800 auth_method 0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: mgrc handle_mgr_configure stats_period=5
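
The mgrc block above shows the OSD dropping and re-establishing its ceph-mgr session against a dual-protocol address vector, after which handle_mgr_configure sets a 5-second stats reporting period. A tiny parse of the addrvec exactly as logged (sketch; my reading is that v2/v1 are the messenger protocol versions and the trailing number is the connection nonce):

```python
addrvec = "[v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]"
for ep in addrvec.strip("[]").split(","):
    proto, rest = ep.split(":", 1)      # messenger protocol: v2 or v1
    hostport, nonce = rest.split("/")   # address:port and connection nonce
    print(proto, hostport, "nonce", nonce)
```
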
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:00.633503+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138000 session 0x55ba94bd3680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260400 session 0x55ba974eaf00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:01.633659+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:02.633816+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:03.633977+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:04.634159+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:05.634331+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:06.634525+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26299 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
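
The single ceph-mgr audit line above records the dispatched command as JSON; it presumably corresponds to an administrator running `ceph orch status --detail` while this OSD chatter was being flushed. Pulling the command back out of the logged payload (sketch):

```python
import json

payload = '[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]'
cmd = json.loads(payload)[0]
print(cmd["prefix"], "(detailed)" if cmd.get("detail") else "")
```
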
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:07.634649+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:08.634986+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:09.635197+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:10.635483+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:11.635664+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.530868530s of 29.659109116s, submitted: 37
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 21291008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97501c20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba97d42780
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba9496dc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba974ebc20
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba97e4a3c0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:12.635936+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e85000/0x0/0x4ffc00000, data 0x131fe46/0x13d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:13.636142+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:14.636408+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:15.636632+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217867 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba962074a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:16.636907+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9496cf00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:17.637123+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba95149680
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba981014a0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 28672000 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:18.637378+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 27271168 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:19.637618+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:20.637867+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285791 data_alloc: 234881024 data_used: 10125312
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:21.638053+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:22.638309+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:23.638602+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:24.638802+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:25.638978+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285791 data_alloc: 234881024 data_used: 10125312
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:26.639246+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:27.639416+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:28.639585+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.617950439s of 16.726493835s, submitted: 35
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 20938752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:29.639735+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:30.640002+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351657 data_alloc: 234881024 data_used: 10366976
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:31.640236+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:32.640455+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:33.640640+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1acee46/0x1b86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:34.640862+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:35.641068+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345209 data_alloc: 234881024 data_used: 10371072
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:36.641310+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:37.641445+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:38.641685+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 18628608 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.372352600s of 10.616518974s, submitted: 90
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba978ae1e0
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba9772c000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:39.641820+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9732e960
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:40.641990+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:41.642239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:42.642399+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:43.642551+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:44.642799+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:45.642942+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:46.643142+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:47.643318+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:48.643484+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:49.643716+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:50.643967+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:51.644129+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:52.644312+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:53.644490+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:54.644663+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:55.644843+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:56.645187+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:57.645363+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:58.645559+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:59.645727+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:00.645930+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:01.646207+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:02.646373+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:03.646552+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:04.646976+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:05.647161+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:06.647419+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:07.647656+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:08.647925+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:09.648089+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:10.648285+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:11.648451+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:12.648629+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:13.648849+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:14.648991+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:15.649315+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:16.649481+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:17.649613+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:18.649794+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:19.650047+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:20.650288+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:21.650497+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:22.650659+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:23.650844+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:24.650980+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:25.651174+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:26.651475+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:27.651654+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:28.651902+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:29.652089+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:30.652239+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:31.652413+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:32.652621+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:33.652888+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:34.653131+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:35.653364+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:36.653631+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:37.653848+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:38.654104+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:39.654281+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:40.654456+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:41.654775+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:42.655123+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:43.655489+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:44.655856+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:45.656154+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:46.656472+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:47.656786+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:48.657046+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:49.657384+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:50.657599+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:51.657831+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:52.658003+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:53.658193+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:54.658344+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:55.658503+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:56.658689+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:57.658844+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:58.659002+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:59.659212+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:00.659563+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:01.659689+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:02.659837+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:03.660003+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 25346048 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:04.660234+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 25280512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}'
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}'
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:05.660398+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 25509888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:46:38 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:46:38 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:06.660568+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 25608192 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:46:38 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:07.660716+0000)
Feb 02 11:46:38 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110518272 unmapped: 25452544 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:46:38 compute-0 ceph-osd[83123]: do_command 'log dump' '{prefix=log dump}'
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 11:46:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345779522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:46:38 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373286407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:38 compute-0 nova_compute[251290]: 2026-02-02 11:46:38.734 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:38 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26320 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:38 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:46:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:38.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.012 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.013 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4299MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.014 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.014 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:46:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:46:39 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281489479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.16761 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.26278 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1858546630' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/777504601' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.16779 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.26299 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3345779522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2387722807' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/899835992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3373286407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2028382456' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2302987830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26771 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26344 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 11:46:39 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730688788' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26786 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.724 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.724 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26356 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:39 compute-0 nova_compute[251290]: 2026-02-02 11:46:39.772 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:46:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16842 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:40.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:40 compute-0 crontab[275701]: (root) LIST (root)
Feb 02 11:46:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26795 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 02 11:46:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073222205' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26389 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.26320 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4281489479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3916915685' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.26771 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.26344 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/730688788' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/368641778' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:46:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157720274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.333 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.344 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.472 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.474 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.474 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:46:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16887 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26410 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:40 compute-0 nova_compute[251290]: 2026-02-02 11:46:40.931 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.26786 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.26356 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.16842 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.26795 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3851460082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2283175838' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1519261556' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1073222205' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.26389 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3157720274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1499831893' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/579130795' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/950447203' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4114723002' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3351620427' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/355191000' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16914 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 02 11:46:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431626912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:46:41 compute-0 nova_compute[251290]: 2026-02-02 11:46:41.474 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:41 compute-0 nova_compute[251290]: 2026-02-02 11:46:41.474 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 02 11:46:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301142244' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:46:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:41 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16932 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:41.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:42 compute-0 nova_compute[251290]: 2026-02-02 11:46:42.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:42 compute-0 nova_compute[251290]: 2026-02-02 11:46:42.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 02 11:46:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1904271546' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.16956 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.16887 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.26410 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.16914 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1816453087' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3431626912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/448571638' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3324301533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1453616777' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2310180105' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3301142244' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3597650972' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2447572822' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1961764378' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1904271546' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2730451514' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 02 11:46:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485973373' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:46:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 02 11:46:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825351177' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:46:42 compute-0 nova_compute[251290]: 2026-02-02 11:46:42.950 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:43 compute-0 nova_compute[251290]: 2026-02-02 11:46:43.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:43 compute-0 nova_compute[251290]: 2026-02-02 11:46:43.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:43 compute-0 nova_compute[251290]: 2026-02-02 11:46:43.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:46:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 02 11:46:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062707582' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.16932 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.16956 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4125833886' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/839770461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3485973373' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3817176606' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1148962115' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1825351177' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4281196641' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2860900623' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2062707582' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2311803011' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4155271070' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/200659' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:46:43 compute-0 systemd[1]: Starting Hostname Service...
Feb 02 11:46:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 02 11:46:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719606444' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:46:43 compute-0 systemd[1]: Started Hostname Service.
Feb 02 11:46:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:43.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 02 11:46:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991775537' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26936 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 02 11:46:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118479873' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:46:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26945 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26557 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26957 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:46:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357839219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:46:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357839219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:46:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552451184' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1030399339' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1421591959' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/719606444' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2739383542' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3884040195' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1365835005' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3991775537' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1118479873' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/396765594' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26972 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:46:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26590 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26584 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 02 11:46:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484655613' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:46:44 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26996 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26608 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17076 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26936 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26945 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26557 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26957 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3357839219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3357839219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3552451184' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/850703993' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26972 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.26590 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2484655613' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3139252415' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/624201933' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 02 11:46:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112986847' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27035 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:45.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17088 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26641 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:45 compute-0 nova_compute[251290]: 2026-02-02 11:46:45.933 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 02 11:46:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357848650' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:46 compute-0 nova_compute[251290]: 2026-02-02 11:46:46.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:46:46 compute-0 nova_compute[251290]: 2026-02-02 11:46:46.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:46:46 compute-0 nova_compute[251290]: 2026-02-02 11:46:46.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:46:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:46.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:46 compute-0 nova_compute[251290]: 2026-02-02 11:46:46.043 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27053 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.26584 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.26996 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.26608 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.17076 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.27008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3112986847' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.26626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1678731926' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4048743050' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2357848650' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/783506989' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2004022304' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27080 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26677 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17139 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 02 11:46:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2488413817' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:47.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26695 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17157 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Feb 02 11:46:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889419605' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.27035 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.26641 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.27053 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.26662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.27080 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2520208097' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1172512930' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2488413817' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2721896023' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2889419605' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17199 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:47.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:47 compute-0 sudo[276746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:46:47 compute-0 sudo[276746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:46:47 compute-0 sudo[276746]: pam_unix(sudo:session): session closed for user root
Feb 02 11:46:47 compute-0 nova_compute[251290]: 2026-02-02 11:46:47.954 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981452315' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:48.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344205706' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.26677 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.17139 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.26695 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.17157 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1981452315' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2626366756' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2601848170' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1344205706' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17235 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26788 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:48.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:49 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17271 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.17199 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.17223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.27185 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.17235 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/383320549' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/266254373' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/824349126' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:46:49 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:46:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:49.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 02 11:46:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086381638' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:46:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:50.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17316 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.26788 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.17271 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2695882778' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2711962948' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3086381638' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3288224573' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4276263130' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27254 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 02 11:46:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793425364' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:46:50 compute-0 nova_compute[251290]: 2026-02-02 11:46:50.935 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26854 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Feb 02 11:46:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061241310' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:51 compute-0 ceph-mon[74676]: from='client.17316 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1711522012' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3793425364' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2828928036' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4061241310' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:46:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:51.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 02 11:46:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842855443' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:46:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:52.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27296 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 02 11:46:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004231292' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26893 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17367 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.27254 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.26854 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1894950718' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3325504282' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/842855443' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2969434764' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4004231292' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Feb 02 11:46:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2878341386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Feb 02 11:46:52 compute-0 nova_compute[251290]: 2026-02-02 11:46:52.988 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27317 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 02 11:46:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805457233' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26911 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27329 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:46:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:53.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:46:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26923 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.27296 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.26893 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.17367 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3155734167' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.27317 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2805457233' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.26911 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.27329 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/91178147' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Feb 02 11:46:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb 02 11:46:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64602242' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Feb 02 11:46:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:54.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17403 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27359 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.26923 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/64602242' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/30542321' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3942623271' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.17403 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.27359 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4030930282' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Feb 02 11:46:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Feb 02 11:46:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948202357' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27371 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26953 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27374 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26962 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:55.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17430 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/948202357' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.27371 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.26953 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.27374 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1928055034' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.26962 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/701215613' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Feb 02 11:46:55 compute-0 nova_compute[251290]: 2026-02-02 11:46:55.937 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:46:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:46:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:46:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:46:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:46:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:56 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27395 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Feb 02 11:46:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468064873' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Feb 02 11:46:56 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27404 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Feb 02 11:46:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041470198' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Feb 02 11:46:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:46:56 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.26995 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:56 compute-0 ovs-appctl[278532]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 02 11:46:56 compute-0 ovs-appctl[278552]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 02 11:46:56 compute-0 ovs-appctl[278557]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 02 11:46:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:46:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:46:57 compute-0 ceph-mon[74676]: pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.17430 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/465778144' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.27395 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3468064873' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3867553789' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.27404 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2041470198' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17460 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:57.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27010 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 podman[278666]: 2026-02-02 11:46:57.287333781 +0000 UTC m=+0.067243879 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 11:46:57 compute-0 podman[278670]: 2026-02-02 11:46:57.315813328 +0000 UTC m=+0.095376656 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27422 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:46:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:46:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:57.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:46:57 compute-0 nova_compute[251290]: 2026-02-02 11:46:57.991 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:46:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Feb 02 11:46:58 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813911476' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Feb 02 11:46:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:46:58.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.26995 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.17460 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/740789564' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.27010 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.27422 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/284304362' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/517611566' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2813911476' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Feb 02 11:46:58 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47503527' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27452 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17493 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:46:58.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27040 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4203277388' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/932635323' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/47503527' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.27452 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3781724507' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1428217072' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17499 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:46:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:46:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:46:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 11:46:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905206878' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:46:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:46:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:46:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:46:59.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.17493 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.27040 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.17499 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2396867300' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2713533640' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/905206878' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2254069485' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1759801894' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Feb 02 11:47:00 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816631556' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27503 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Feb 02 11:47:00 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291710493' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:00 compute-0 nova_compute[251290]: 2026-02-02 11:47:00.943 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:01 compute-0 ceph-mon[74676]: pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1816631556' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/233364117' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2607296360' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2291710493' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/811207137' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17535 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27085 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Feb 02 11:47:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/269780707' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:01.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:02.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:02 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27530 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.27503 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.17535 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3232861156' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.27085 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/269780707' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1466011141' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1599215768' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Feb 02 11:47:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62831373' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Feb 02 11:47:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791006958' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27109 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:02 compute-0 nova_compute[251290]: 2026-02-02 11:47:02.994 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27554 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Feb 02 11:47:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549232008' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.27530 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/62831373' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2544501863' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/715872125' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/791006958' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3549232008' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17586 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27130 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:03.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27136 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Feb 02 11:47:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300010185' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.27109 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.27554 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/527360060' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.27569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.17586 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.27130 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2852578981' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1300010185' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2579226137' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27599 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Feb 02 11:47:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2119850482' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27611 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:04 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17616 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:05 compute-0 ceph-mon[74676]: from='client.27136 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3759834338' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: from='client.27599 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2119850482' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3063942330' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27163 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Feb 02 11:47:05 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689290063' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27175 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:05.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:05 compute-0 nova_compute[251290]: 2026-02-02 11:47:05.943 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:05 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:06 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27644 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.27611 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.17616 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2866343228' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.27163 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/689290063' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1929808384' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/460846894' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17649 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27659 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 sudo[280216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:47:06 compute-0 sudo[280216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:06 compute-0 sudo[280216]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:06 compute-0 sudo[280241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:47:06 compute-0 sudo[280241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Feb 02 11:47:06 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747031640' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:06 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27202 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:07 compute-0 sudo[280241]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:07.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:47:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:07.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.27175 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.27644 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.17649 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.27659 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3085836540' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2747031640' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1091484831' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Feb 02 11:47:07 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688241722' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27208 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17673 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:07 compute-0 sudo[280408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:47:07 compute-0 sudo[280408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:07 compute-0 sudo[280408]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:08 compute-0 nova_compute[251290]: 2026-02-02 11:47:08.036 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:08.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17685 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:08 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:08 compute-0 ceph-mon[74676]: from='client.27202 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2688241722' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mon[74676]: from='client.27208 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2773640866' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3114684281' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:47:08 compute-0 systemd[1]: Starting Time & Date Service...
Feb 02 11:47:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:47:08 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:08 compute-0 systemd[1]: Started Time & Date Service.
Feb 02 11:47:08 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Feb 02 11:47:08 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475333489' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:08 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 02 11:47:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:08.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914523453' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:47:09 compute-0 sudo[280763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:47:09 compute-0 sudo[280763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:09 compute-0 sudo[280763]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:09 compute-0 sudo[280803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:47:09 compute-0 sudo[280803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='client.17673 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='client.17685 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2907899433' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1475333489' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3914523453' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:47:09 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17703 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.717590098 +0000 UTC m=+0.052404194 container create 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:47:09 compute-0 systemd[1]: Started libpod-conmon-82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318.scope.
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.695960538 +0000 UTC m=+0.030774654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.819600383 +0000 UTC m=+0.154414509 container init 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.827255842 +0000 UTC m=+0.162069938 container start 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.832324678 +0000 UTC m=+0.167138794 container attach 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:47:09 compute-0 modest_yalow[280984]: 167 167
Feb 02 11:47:09 compute-0 systemd[1]: libpod-82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318.scope: Deactivated successfully.
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.837053633 +0000 UTC m=+0.171867749 container died 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:47:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4075b066c0a70e520cab7ae04a332c751eb407d3b679772ebf0c0069d25ebea9-merged.mount: Deactivated successfully.
Feb 02 11:47:09 compute-0 podman[280942]: 2026-02-02 11:47:09.884270367 +0000 UTC m=+0.219084463 container remove 82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:47:09 compute-0 systemd[1]: libpod-conmon-82e5db8086759ef8693a05a49c87449ce01bd56e82c388957a0edd30cbef0318.scope: Deactivated successfully.
Feb 02 11:47:09 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.17709 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:10.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.073851183 +0000 UTC m=+0.075692002 container create 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.026851885 +0000 UTC m=+0.028692744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:10 compute-0 systemd[1]: Started libpod-conmon-9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7.scope.
Feb 02 11:47:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.190338803 +0000 UTC m=+0.192179652 container init 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.19826168 +0000 UTC m=+0.200102499 container start 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.20523713 +0000 UTC m=+0.207077969 container attach 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:47:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 11:47:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797494858' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:10 compute-0 lucid_mcclintock[281073]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:47:10 compute-0 lucid_mcclintock[281073]: --> All data devices are unavailable
Feb 02 11:47:10 compute-0 ceph-mon[74676]: pgmap v1067: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:10 compute-0 ceph-mon[74676]: from='client.17703 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:10 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1797494858' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:10 compute-0 systemd[1]: libpod-9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7.scope: Deactivated successfully.
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.556929183 +0000 UTC m=+0.558770002 container died 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad2d8b3302ecc787d540bc0c628e0551f2f3b1cea4adf22e4778acec6e454a3-merged.mount: Deactivated successfully.
Feb 02 11:47:10 compute-0 podman[281024]: 2026-02-02 11:47:10.636476244 +0000 UTC m=+0.638317063 container remove 9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:47:10 compute-0 systemd[1]: libpod-conmon-9d85a4b7233199eceb2032076a5663953c06c4953216bd9c89fd3839eb92c0b7.scope: Deactivated successfully.
Feb 02 11:47:10 compute-0 sudo[280803]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:10 compute-0 sudo[281155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:47:10 compute-0 sudo[281155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:10 compute-0 sudo[281155]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:10 compute-0 sudo[281180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:47:10 compute-0 sudo[281180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:10 compute-0 nova_compute[251290]: 2026-02-02 11:47:10.945 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:10 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Feb 02 11:47:10 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640244745' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.235059977 +0000 UTC m=+0.058531959 container create 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb 02 11:47:11 compute-0 systemd[1]: Started libpod-conmon-807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be.scope.
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.207542428 +0000 UTC m=+0.031014420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.353987847 +0000 UTC m=+0.177459859 container init 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.361162613 +0000 UTC m=+0.184634615 container start 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb 02 11:47:11 compute-0 jovial_nobel[281265]: 167 167
Feb 02 11:47:11 compute-0 systemd[1]: libpod-807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be.scope: Deactivated successfully.
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.371476208 +0000 UTC m=+0.194948310 container attach 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.371927311 +0000 UTC m=+0.195399303 container died 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b100b10a9880776af1dee5f80a086d0b24cac5dadf6fa97fc3668d360e13524d-merged.mount: Deactivated successfully.
Feb 02 11:47:11 compute-0 podman[281248]: 2026-02-02 11:47:11.432369854 +0000 UTC m=+0.255841846 container remove 807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:47:11 compute-0 systemd[1]: libpod-conmon-807d350b0336cc68c36342c57dc1579ddb7b65fd69d147a436bbcfbf6bd2e3be.scope: Deactivated successfully.
Feb 02 11:47:11 compute-0 ceph-mon[74676]: from='client.17709 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:47:11 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/640244745' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.566945283 +0000 UTC m=+0.044829237 container create d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:47:11 compute-0 systemd[1]: Started libpod-conmon-d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9.scope.
Feb 02 11:47:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.548033211 +0000 UTC m=+0.025917185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3d9c1cb0c7f9125768b416631173e8ebfe705fc989e63704793d627e4a7916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3d9c1cb0c7f9125768b416631173e8ebfe705fc989e63704793d627e4a7916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3d9c1cb0c7f9125768b416631173e8ebfe705fc989e63704793d627e4a7916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3d9c1cb0c7f9125768b416631173e8ebfe705fc989e63704793d627e4a7916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.667486026 +0000 UTC m=+0.145369990 container init d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.674913398 +0000 UTC m=+0.152797352 container start d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.682144626 +0000 UTC m=+0.160028580 container attach d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:47:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:11 compute-0 sharp_golick[281306]: {
Feb 02 11:47:11 compute-0 sharp_golick[281306]:     "1": [
Feb 02 11:47:11 compute-0 sharp_golick[281306]:         {
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "devices": [
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "/dev/loop3"
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             ],
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "lv_name": "ceph_lv0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "lv_size": "21470642176",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "name": "ceph_lv0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "tags": {
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.cluster_name": "ceph",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.crush_device_class": "",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.encrypted": "0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.osd_id": "1",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.type": "block",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.vdo": "0",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:                 "ceph.with_tpm": "0"
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             },
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "type": "block",
Feb 02 11:47:11 compute-0 sharp_golick[281306]:             "vg_name": "ceph_vg0"
Feb 02 11:47:11 compute-0 sharp_golick[281306]:         }
Feb 02 11:47:11 compute-0 sharp_golick[281306]:     ]
Feb 02 11:47:11 compute-0 sharp_golick[281306]: }
Feb 02 11:47:11 compute-0 systemd[1]: libpod-d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9.scope: Deactivated successfully.
Feb 02 11:47:11 compute-0 podman[281289]: 2026-02-02 11:47:11.972883292 +0000 UTC m=+0.450767276 container died d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b3d9c1cb0c7f9125768b416631173e8ebfe705fc989e63704793d627e4a7916-merged.mount: Deactivated successfully.
Feb 02 11:47:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:12.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:12 compute-0 podman[281289]: 2026-02-02 11:47:12.068978537 +0000 UTC m=+0.546862491 container remove d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:47:12 compute-0 systemd[1]: libpod-conmon-d71a6a1e53d7b20fc33b67ae19d9c75081c9a33c837433c48027057b47af8eb9.scope: Deactivated successfully.
Feb 02 11:47:12 compute-0 sudo[281180]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:12 compute-0 sudo[281331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:47:12 compute-0 sudo[281331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:12 compute-0 sudo[281331]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:12 compute-0 sudo[281356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:47:12 compute-0 sudo[281356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:12 compute-0 podman[281421]: 2026-02-02 11:47:12.628481548 +0000 UTC m=+0.024030690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:12 compute-0 ceph-mon[74676]: pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:47:12 compute-0 podman[281421]: 2026-02-02 11:47:12.847873989 +0000 UTC m=+0.243423111 container create f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:47:12 compute-0 systemd[1]: Started libpod-conmon-f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3.scope.
Feb 02 11:47:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:12 compute-0 podman[281421]: 2026-02-02 11:47:12.985956458 +0000 UTC m=+0.381505640 container init f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 11:47:12 compute-0 podman[281421]: 2026-02-02 11:47:12.993103713 +0000 UTC m=+0.388652845 container start f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:47:12 compute-0 infallible_shannon[281438]: 167 167
Feb 02 11:47:12 compute-0 systemd[1]: libpod-f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3.scope: Deactivated successfully.
Feb 02 11:47:13 compute-0 podman[281421]: 2026-02-02 11:47:13.023954197 +0000 UTC m=+0.419503339 container attach f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:47:13 compute-0 podman[281421]: 2026-02-02 11:47:13.025130771 +0000 UTC m=+0.420679913 container died f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:47:13 compute-0 nova_compute[251290]: 2026-02-02 11:47:13.068 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6d05fa5be098ec4f0e11da8422fd00e7218d0956127f12d8ee3a258a7da820-merged.mount: Deactivated successfully.
Feb 02 11:47:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:13 compute-0 podman[281421]: 2026-02-02 11:47:13.118270722 +0000 UTC m=+0.513819844 container remove f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:47:13 compute-0 systemd[1]: libpod-conmon-f47701c8b0071e094f4f88d62e2579cc4da0ce3db970429118badd65a6a693d3.scope: Deactivated successfully.
Feb 02 11:47:13 compute-0 podman[281464]: 2026-02-02 11:47:13.272432482 +0000 UTC m=+0.048536213 container create 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:47:13 compute-0 systemd[1]: Started libpod-conmon-6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad.scope.
Feb 02 11:47:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5139498daee7d63a0ed9ad75dc281abaf1d4ecdcc3b59640fe53c2790f118a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5139498daee7d63a0ed9ad75dc281abaf1d4ecdcc3b59640fe53c2790f118a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5139498daee7d63a0ed9ad75dc281abaf1d4ecdcc3b59640fe53c2790f118a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5139498daee7d63a0ed9ad75dc281abaf1d4ecdcc3b59640fe53c2790f118a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:47:13 compute-0 podman[281464]: 2026-02-02 11:47:13.251828801 +0000 UTC m=+0.027932552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:47:13 compute-0 podman[281464]: 2026-02-02 11:47:13.351638733 +0000 UTC m=+0.127742484 container init 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:47:13 compute-0 podman[281464]: 2026-02-02 11:47:13.357635975 +0000 UTC m=+0.133739706 container start 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:47:13 compute-0 podman[281464]: 2026-02-02 11:47:13.362430162 +0000 UTC m=+0.138533943 container attach 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:47:13 compute-0 ceph-mon[74676]: pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:47:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:13.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:47:14 compute-0 lvm[281556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:47:14 compute-0 lvm[281556]: VG ceph_vg0 finished
Feb 02 11:47:14 compute-0 quirky_thompson[281481]: {}
Feb 02 11:47:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:14.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:14 compute-0 systemd[1]: libpod-6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad.scope: Deactivated successfully.
Feb 02 11:47:14 compute-0 systemd[1]: libpod-6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad.scope: Consumed 1.100s CPU time.
Feb 02 11:47:14 compute-0 podman[281464]: 2026-02-02 11:47:14.094306477 +0000 UTC m=+0.870410218 container died 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5139498daee7d63a0ed9ad75dc281abaf1d4ecdcc3b59640fe53c2790f118a9-merged.mount: Deactivated successfully.
Feb 02 11:47:14 compute-0 podman[281464]: 2026-02-02 11:47:14.140504661 +0000 UTC m=+0.916608392 container remove 6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:47:14 compute-0 systemd[1]: libpod-conmon-6fa8ca8ebe05dc012c62533cbc688e35eeb999abe3af44319cb5a8ed2fcf1cad.scope: Deactivated successfully.
Feb 02 11:47:14 compute-0 sudo[281356]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:47:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:47:14 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:14 compute-0 sudo[281572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:47:14 compute-0 sudo[281572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:14 compute-0 sudo[281572]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:47:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:47:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:15 compute-0 nova_compute[251290]: 2026-02-02 11:47:15.947 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:47:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:16.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:47:16 compute-0 ceph-mon[74676]: pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:17.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:47:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:17.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:17.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:18.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:18 compute-0 nova_compute[251290]: 2026-02-02 11:47:18.117 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:18 compute-0 ceph-mon[74676]: pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:18.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:47:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:19.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:47:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:20.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:20 compute-0 ceph-mon[74676]: pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:47:20 compute-0 nova_compute[251290]: 2026-02-02 11:47:20.948 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:21.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:22 compute-0 ceph-mon[74676]: pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:47:22.686 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:47:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:47:22.687 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:47:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:47:22.687 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:47:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:23 compute-0 nova_compute[251290]: 2026-02-02 11:47:23.146 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:24 compute-0 ceph-mon[74676]: pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:25 compute-0 nova_compute[251290]: 2026-02-02 11:47:25.950 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:26.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:26 compute-0 ceph-mon[74676]: pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:47:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:27.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:28 compute-0 sudo[281612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:47:28 compute-0 sudo[281612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:28 compute-0 sudo[281612]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:28 compute-0 podman[281636]: 2026-02-02 11:47:28.090671948 +0000 UTC m=+0.066846647 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:47:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:28 compute-0 nova_compute[251290]: 2026-02-02 11:47:28.148 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:28 compute-0 podman[281637]: 2026-02-02 11:47:28.16571175 +0000 UTC m=+0.141665533 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:47:28 compute-0 ceph-mon[74676]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:28.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:47:29
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', '.nfs', 'volumes', 'default.rgw.control']
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:47:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:47:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:47:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:47:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:47:30 compute-0 ceph-mon[74676]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:30 compute-0 nova_compute[251290]: 2026-02-02 11:47:30.952 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:31 compute-0 ceph-mon[74676]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:33 compute-0 nova_compute[251290]: 2026-02-02 11:47:33.151 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:33.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:34 compute-0 ceph-mon[74676]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:35.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:35 compute-0 nova_compute[251290]: 2026-02-02 11:47:35.954 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:36 compute-0 ceph-mon[74676]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:36] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:47:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:36] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:47:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:37.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:37.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:38 compute-0 nova_compute[251290]: 2026-02-02 11:47:38.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:38 compute-0 nova_compute[251290]: 2026-02-02 11:47:38.324 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:38 compute-0 ceph-mon[74676]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2263374763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:38 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3101813195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:38 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 11:47:38 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 11:47:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:38.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:47:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:38.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.059 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.060 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.060 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.060 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.060 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:47:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4232108161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:47:39 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125862682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.542 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.761 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.763 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4315MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.764 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.764 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.864 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.865 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:47:39 compute-0 nova_compute[251290]: 2026-02-02 11:47:39.886 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:47:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:39.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:47:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:47:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:47:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217399799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:40 compute-0 ceph-mon[74676]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1125862682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4200216164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.378 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.385 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.410 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.412 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.412 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:47:40 compute-0 nova_compute[251290]: 2026-02-02 11:47:40.959 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4217399799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:47:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:41.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:42 compute-0 nova_compute[251290]: 2026-02-02 11:47:42.412 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:42 compute-0 nova_compute[251290]: 2026-02-02 11:47:42.412 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:42 compute-0 ceph-mon[74676]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:43 compute-0 nova_compute[251290]: 2026-02-02 11:47:43.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:43 compute-0 nova_compute[251290]: 2026-02-02 11:47:43.327 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:43.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:47:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873797949' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:47:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:47:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873797949' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:47:44 compute-0 nova_compute[251290]: 2026-02-02 11:47:44.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:44 compute-0 ceph-mon[74676]: pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/873797949' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:47:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/873797949' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:47:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:47:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:45 compute-0 nova_compute[251290]: 2026-02-02 11:47:45.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:45 compute-0 nova_compute[251290]: 2026-02-02 11:47:45.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:45 compute-0 nova_compute[251290]: 2026-02-02 11:47:45.018 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:47:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:45 compute-0 nova_compute[251290]: 2026-02-02 11:47:45.961 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:46 compute-0 ceph-mon[74676]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:46] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:47:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:46] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:47:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:47.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:47.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:48 compute-0 nova_compute[251290]: 2026-02-02 11:47:48.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:48 compute-0 nova_compute[251290]: 2026-02-02 11:47:48.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:47:48 compute-0 nova_compute[251290]: 2026-02-02 11:47:48.021 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:47:48 compute-0 nova_compute[251290]: 2026-02-02 11:47:48.049 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:47:48 compute-0 sudo[281746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:47:48 compute-0 sudo[281746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:47:48 compute-0 sudo[281746]: pam_unix(sudo:session): session closed for user root
Feb 02 11:47:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:48 compute-0 nova_compute[251290]: 2026-02-02 11:47:48.331 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:48 compute-0 ceph-mon[74676]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:48.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:47:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:49.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:50 compute-0 ceph-mon[74676]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:50 compute-0 nova_compute[251290]: 2026-02-02 11:47:50.965 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:51.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:52 compute-0 nova_compute[251290]: 2026-02-02 11:47:52.043 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:47:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:52.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:52 compute-0 ceph-mon[74676]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:53 compute-0 nova_compute[251290]: 2026-02-02 11:47:53.333 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:53.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:54.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:54 compute-0 ceph-mon[74676]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:55 compute-0 nova_compute[251290]: 2026-02-02 11:47:55.970 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:47:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:47:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:47:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:47:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:47:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:56 compute-0 ceph-mon[74676]: pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:47:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:56] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:47:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:47:56] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb 02 11:47:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:57.214Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:47:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:57.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:47:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:47:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:47:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:47:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:47:58 compute-0 podman[281781]: 2026-02-02 11:47:58.269718286 +0000 UTC m=+0.052356522 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 02 11:47:58 compute-0 podman[281782]: 2026-02-02 11:47:58.305844672 +0000 UTC m=+0.088885880 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 11:47:58 compute-0 nova_compute[251290]: 2026-02-02 11:47:58.336 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:47:58 compute-0 ceph-mon[74676]: pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:47:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:58.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:47:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:47:58.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:47:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:47:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:47:59 compute-0 ceph-mon[74676]: pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:47:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:47:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:47:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:47:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:47:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:00.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:00 compute-0 nova_compute[251290]: 2026-02-02 11:48:00.972 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:02.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:02 compute-0 ceph-mon[74676]: pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:03 compute-0 nova_compute[251290]: 2026-02-02 11:48:03.338 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:04 compute-0 ceph-mon[74676]: pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:05 compute-0 nova_compute[251290]: 2026-02-02 11:48:05.974 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:06 compute-0 ceph-mon[74676]: pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:06] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:48:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:06] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:48:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:07.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:08 compute-0 sudo[281834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:48:08 compute-0 sudo[281834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:08 compute-0 sudo[281834]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:08 compute-0 nova_compute[251290]: 2026-02-02 11:48:08.342 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:08 compute-0 ceph-mon[74676]: pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:08.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:10.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:10 compute-0 ceph-mon[74676]: pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:10 compute-0 nova_compute[251290]: 2026-02-02 11:48:10.976 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:12 compute-0 sudo[273599]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:12 compute-0 sshd-session[273597]: Received disconnect from 192.168.122.10 port 32804:11: disconnected by user
Feb 02 11:48:12 compute-0 sshd-session[273597]: Disconnected from user zuul 192.168.122.10 port 32804
Feb 02 11:48:12 compute-0 sshd-session[273594]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:48:12 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Feb 02 11:48:12 compute-0 systemd[1]: session-56.scope: Consumed 3min 17.462s CPU time, 747.0M memory peak, read 260.2M from disk, written 97.5M to disk.
Feb 02 11:48:12 compute-0 systemd-logind[793]: Session 56 logged out. Waiting for processes to exit.
Feb 02 11:48:12 compute-0 systemd-logind[793]: Removed session 56.
Feb 02 11:48:12 compute-0 ceph-mon[74676]: pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:12 compute-0 sshd-session[281863]: Accepted publickey for zuul from 192.168.122.10 port 58650 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:48:12 compute-0 systemd-logind[793]: New session 57 of user zuul.
Feb 02 11:48:12 compute-0 systemd[1]: Started Session 57 of User zuul.
Feb 02 11:48:12 compute-0 sshd-session[281863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:48:12 compute-0 sudo[281867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-02-02-ojitjsq.tar.xz
Feb 02 11:48:12 compute-0 sudo[281867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:48:12 compute-0 sudo[281867]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:12 compute-0 sshd-session[281866]: Received disconnect from 192.168.122.10 port 58650:11: disconnected by user
Feb 02 11:48:12 compute-0 sshd-session[281866]: Disconnected from user zuul 192.168.122.10 port 58650
Feb 02 11:48:12 compute-0 sshd-session[281863]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:48:12 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Feb 02 11:48:12 compute-0 systemd-logind[793]: Session 57 logged out. Waiting for processes to exit.
Feb 02 11:48:12 compute-0 systemd-logind[793]: Removed session 57.
Feb 02 11:48:12 compute-0 sshd-session[281892]: Accepted publickey for zuul from 192.168.122.10 port 58666 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:48:12 compute-0 systemd-logind[793]: New session 58 of user zuul.
Feb 02 11:48:12 compute-0 systemd[1]: Started Session 58 of User zuul.
Feb 02 11:48:12 compute-0 sshd-session[281892]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:48:12 compute-0 sudo[281896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Feb 02 11:48:12 compute-0 sudo[281896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 11:48:12 compute-0 sudo[281896]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:12 compute-0 sshd-session[281895]: Received disconnect from 192.168.122.10 port 58666:11: disconnected by user
Feb 02 11:48:12 compute-0 sshd-session[281895]: Disconnected from user zuul 192.168.122.10 port 58666
Feb 02 11:48:12 compute-0 sshd-session[281892]: pam_unix(sshd:session): session closed for user zuul
Feb 02 11:48:12 compute-0 systemd-logind[793]: Session 58 logged out. Waiting for processes to exit.
Feb 02 11:48:12 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Feb 02 11:48:12 compute-0 systemd-logind[793]: Removed session 58.
Feb 02 11:48:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:13 compute-0 nova_compute[251290]: 2026-02-02 11:48:13.348 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:14.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:14 compute-0 sudo[281924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:48:14 compute-0 sudo[281924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:14 compute-0 sudo[281924]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:14 compute-0 ceph-mon[74676]: pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:14 compute-0 sudo[281949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:48:14 compute-0 sudo[281949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:48:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:15 compute-0 sudo[281949]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:48:15 compute-0 sudo[282007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:48:15 compute-0 sudo[282007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:15 compute-0 sudo[282007]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:15 compute-0 sudo[282032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:48:15 compute-0 sudo[282032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:48:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.747815824 +0000 UTC m=+0.048449530 container create 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:48:15 compute-0 systemd[1]: Started libpod-conmon-45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e.scope.
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.723906729 +0000 UTC m=+0.024540455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.85298661 +0000 UTC m=+0.153620326 container init 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.862562964 +0000 UTC m=+0.163196660 container start 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.868610868 +0000 UTC m=+0.169244584 container attach 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:48:15 compute-0 pedantic_carver[282113]: 167 167
Feb 02 11:48:15 compute-0 systemd[1]: libpod-45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e.scope: Deactivated successfully.
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.870205593 +0000 UTC m=+0.170839299 container died 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7a5bcb872bc399d981720d799c7266aa96ad184ec47e79fc783f3f2eebdc07-merged.mount: Deactivated successfully.
Feb 02 11:48:15 compute-0 podman[282097]: 2026-02-02 11:48:15.930353358 +0000 UTC m=+0.230987054 container remove 45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carver, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:48:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:15 compute-0 systemd[1]: libpod-conmon-45005e63ed7237290ea8bb80a5dd49cb96766522f69ba59d52b7eb9a097b0f4e.scope: Deactivated successfully.
Feb 02 11:48:15 compute-0 nova_compute[251290]: 2026-02-02 11:48:15.978 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.094373091 +0000 UTC m=+0.051349723 container create ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:16 compute-0 systemd[1]: Started libpod-conmon-ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80.scope.
Feb 02 11:48:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:16.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.074376467 +0000 UTC m=+0.031353139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.191901887 +0000 UTC m=+0.148878559 container init ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.202429909 +0000 UTC m=+0.159406561 container start ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.207141914 +0000 UTC m=+0.164118556 container attach ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 02 11:48:16 compute-0 awesome_sammet[282153]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:48:16 compute-0 awesome_sammet[282153]: --> All data devices are unavailable
Feb 02 11:48:16 compute-0 ceph-mon[74676]: pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:16 compute-0 ceph-mon[74676]: pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:16 compute-0 systemd[1]: libpod-ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80.scope: Deactivated successfully.
Feb 02 11:48:16 compute-0 conmon[282153]: conmon ec1b5448995a02e444d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80.scope/container/memory.events
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.561940627 +0000 UTC m=+0.518917289 container died ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e2ce5fb3e5d925d5fbdb997b58f610d7352c5dbfd4f0f0ca20167995c294c6e-merged.mount: Deactivated successfully.
Feb 02 11:48:16 compute-0 podman[282136]: 2026-02-02 11:48:16.612796625 +0000 UTC m=+0.569773267 container remove ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:48:16 compute-0 systemd[1]: libpod-conmon-ec1b5448995a02e444d3360c8a22a68eeff444fb082e290757478c6f4d2a2d80.scope: Deactivated successfully.
Feb 02 11:48:16 compute-0 sudo[282032]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:16 compute-0 sudo[282181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:48:16 compute-0 sudo[282181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:16 compute-0 sudo[282181]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:16 compute-0 sudo[282206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:48:16 compute-0 sudo[282206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:48:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.184478875 +0000 UTC m=+0.037394953 container create 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:48:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:17.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:48:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:17.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:17 compute-0 systemd[1]: Started libpod-conmon-4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124.scope.
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.167829408 +0000 UTC m=+0.020745516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.285017028 +0000 UTC m=+0.137933126 container init 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.292609266 +0000 UTC m=+0.145525344 container start 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.296917289 +0000 UTC m=+0.149833457 container attach 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:48:17 compute-0 admiring_galileo[282288]: 167 167
Feb 02 11:48:17 compute-0 systemd[1]: libpod-4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124.scope: Deactivated successfully.
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.299820012 +0000 UTC m=+0.152736090 container died 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf26bdfa9ab68879a02b7a5c7f35d2855af712ab6926c07ea524086114bd9876-merged.mount: Deactivated successfully.
Feb 02 11:48:17 compute-0 podman[282272]: 2026-02-02 11:48:17.340167219 +0000 UTC m=+0.193083297 container remove 4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_galileo, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:48:17 compute-0 systemd[1]: libpod-conmon-4144d0de6659bdbc4911aab8fe84ed4c62f9400c9276498a4fe7627d0cf25124.scope: Deactivated successfully.
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.494862475 +0000 UTC m=+0.046296579 container create 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:48:17 compute-0 systemd[1]: Started libpod-conmon-1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935.scope.
Feb 02 11:48:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.47726067 +0000 UTC m=+0.028694804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6b2f80be1f8a1715b083ef4812908bab5258a891be4dcc4a9befa1742c8dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6b2f80be1f8a1715b083ef4812908bab5258a891be4dcc4a9befa1742c8dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6b2f80be1f8a1715b083ef4812908bab5258a891be4dcc4a9befa1742c8dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6b2f80be1f8a1715b083ef4812908bab5258a891be4dcc4a9befa1742c8dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.585480993 +0000 UTC m=+0.136915127 container init 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.592423462 +0000 UTC m=+0.143857566 container start 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.595924992 +0000 UTC m=+0.147359116 container attach 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:48:17 compute-0 infallible_joliot[282326]: {
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:     "1": [
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:         {
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "devices": [
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "/dev/loop3"
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             ],
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "lv_name": "ceph_lv0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "lv_size": "21470642176",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "name": "ceph_lv0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "tags": {
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.cluster_name": "ceph",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.crush_device_class": "",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.encrypted": "0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.osd_id": "1",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.type": "block",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.vdo": "0",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:                 "ceph.with_tpm": "0"
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             },
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "type": "block",
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:             "vg_name": "ceph_vg0"
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:         }
Feb 02 11:48:17 compute-0 infallible_joliot[282326]:     ]
Feb 02 11:48:17 compute-0 infallible_joliot[282326]: }
Feb 02 11:48:17 compute-0 systemd[1]: libpod-1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935.scope: Deactivated successfully.
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.917455841 +0000 UTC m=+0.468889945 container died 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:48:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6b2f80be1f8a1715b083ef4812908bab5258a891be4dcc4a9befa1742c8dd3-merged.mount: Deactivated successfully.
Feb 02 11:48:17 compute-0 podman[282310]: 2026-02-02 11:48:17.96379772 +0000 UTC m=+0.515231814 container remove 1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_joliot, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:17 compute-0 systemd[1]: libpod-conmon-1a6773fc415eee3886d729bcfe33dbafdead8fe77b0793eb5e37e5133175e935.scope: Deactivated successfully.
Feb 02 11:48:18 compute-0 sudo[282206]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:18 compute-0 sudo[282349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:48:18 compute-0 sudo[282349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:18 compute-0 sudo[282349]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:18 compute-0 sudo[282374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:48:18 compute-0 sudo[282374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:18 compute-0 nova_compute[251290]: 2026-02-02 11:48:18.352 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.505906623 +0000 UTC m=+0.044690542 container create f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:18 compute-0 systemd[1]: Started libpod-conmon-f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def.scope.
Feb 02 11:48:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:18 compute-0 ceph-mon[74676]: pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.486932459 +0000 UTC m=+0.025716398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.587874464 +0000 UTC m=+0.126658413 container init f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.597941222 +0000 UTC m=+0.136725141 container start f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.602528504 +0000 UTC m=+0.141312453 container attach f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:48:18 compute-0 compassionate_golick[282457]: 167 167
Feb 02 11:48:18 compute-0 systemd[1]: libpod-f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def.scope: Deactivated successfully.
Feb 02 11:48:18 compute-0 conmon[282457]: conmon f7675523628c58fe318f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def.scope/container/memory.events
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.605761646 +0000 UTC m=+0.144545565 container died f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 11:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5513e6cd06448f1a6664264f4a82a5acdc32d726fdd8126e10f5946069ece83-merged.mount: Deactivated successfully.
Feb 02 11:48:18 compute-0 podman[282440]: 2026-02-02 11:48:18.653997329 +0000 UTC m=+0.192781258 container remove f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb 02 11:48:18 compute-0 systemd[1]: libpod-conmon-f7675523628c58fe318f543a26b6d8b00a50dda7096442f9937a5bbfa0ed4def.scope: Deactivated successfully.
Feb 02 11:48:18 compute-0 podman[282479]: 2026-02-02 11:48:18.795442895 +0000 UTC m=+0.049312035 container create 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 11:48:18 compute-0 systemd[1]: Started libpod-conmon-5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2.scope.
Feb 02 11:48:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:48:18 compute-0 podman[282479]: 2026-02-02 11:48:18.772102116 +0000 UTC m=+0.025971256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b51cf1e2890d10934b7e606da31d55b77ed11d84027c39a94c595af1c4581be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b51cf1e2890d10934b7e606da31d55b77ed11d84027c39a94c595af1c4581be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b51cf1e2890d10934b7e606da31d55b77ed11d84027c39a94c595af1c4581be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b51cf1e2890d10934b7e606da31d55b77ed11d84027c39a94c595af1c4581be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:48:18 compute-0 podman[282479]: 2026-02-02 11:48:18.891140549 +0000 UTC m=+0.145009709 container init 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:48:18 compute-0 podman[282479]: 2026-02-02 11:48:18.897988845 +0000 UTC m=+0.151857985 container start 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:48:18 compute-0 podman[282479]: 2026-02-02 11:48:18.902094693 +0000 UTC m=+0.155963853 container attach 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:48:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:18.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:19 compute-0 lvm[282570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:48:19 compute-0 lvm[282570]: VG ceph_vg0 finished
Feb 02 11:48:19 compute-0 nostalgic_colden[282495]: {}
Feb 02 11:48:19 compute-0 systemd[1]: libpod-5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2.scope: Deactivated successfully.
Feb 02 11:48:19 compute-0 conmon[282495]: conmon 5111e4265235dc3def66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2.scope/container/memory.events
Feb 02 11:48:19 compute-0 systemd[1]: libpod-5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2.scope: Consumed 1.161s CPU time.
Feb 02 11:48:19 compute-0 podman[282479]: 2026-02-02 11:48:19.683499887 +0000 UTC m=+0.937369027 container died 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b51cf1e2890d10934b7e606da31d55b77ed11d84027c39a94c595af1c4581be-merged.mount: Deactivated successfully.
Feb 02 11:48:19 compute-0 podman[282479]: 2026-02-02 11:48:19.729190527 +0000 UTC m=+0.983059677 container remove 5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:48:19 compute-0 systemd[1]: libpod-conmon-5111e4265235dc3def6624bf7931297437021736f5ca1deec2f8ec9a06d82fb2.scope: Deactivated successfully.
Feb 02 11:48:19 compute-0 sudo[282374]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:48:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:19 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:48:19 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:19 compute-0 sudo[282586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:48:19 compute-0 sudo[282586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:19 compute-0 sudo[282586]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:20.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:20 compute-0 ceph-mon[74676]: pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:20 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:20 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:48:20 compute-0 nova_compute[251290]: 2026-02-02 11:48:20.982 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:21.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:22 compute-0 ceph-mon[74676]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:48:22.688 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:48:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:48:22.689 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:48:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:48:22.689 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:48:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:23 compute-0 nova_compute[251290]: 2026-02-02 11:48:23.356 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:24 compute-0 ceph-mon[74676]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:25.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:25 compute-0 nova_compute[251290]: 2026-02-02 11:48:25.982 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:26 compute-0 ceph-mon[74676]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:48:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:48:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:48:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:27.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:28 compute-0 sudo[282619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:48:28 compute-0 sudo[282619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:28 compute-0 sudo[282619]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:28 compute-0 nova_compute[251290]: 2026-02-02 11:48:28.360 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:28 compute-0 ceph-mon[74676]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:28.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:29 compute-0 podman[282645]: 2026-02-02 11:48:29.278985867 +0000 UTC m=+0.058453397 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 11:48:29 compute-0 podman[282646]: 2026-02-02 11:48:29.308768391 +0000 UTC m=+0.086723248 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:48:29
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:48:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:48:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:48:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.960602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032909961256, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2462, "num_deletes": 251, "total_data_size": 4338373, "memory_usage": 4411376, "flush_reason": "Manual Compaction"}
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032909992788, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4244316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29460, "largest_seqno": 31921, "table_properties": {"data_size": 4232246, "index_size": 7604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 29702, "raw_average_key_size": 22, "raw_value_size": 4206650, "raw_average_value_size": 3148, "num_data_blocks": 323, "num_entries": 1336, "num_filter_entries": 1336, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032710, "oldest_key_time": 1770032710, "file_creation_time": 1770032909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 32286 microseconds, and 7802 cpu microseconds.
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.992881) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4244316 bytes OK
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.992912) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.994201) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.994229) EVENT_LOG_v1 {"time_micros": 1770032909994221, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.994261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4327152, prev total WAL file size 4327152, number of live WAL files 2.
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.995686) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4144KB)], [65(11MB)]
Feb 02 11:48:29 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032909995797, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16498194, "oldest_snapshot_seqno": -1}
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6618 keys, 14381983 bytes, temperature: kUnknown
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032910088991, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14381983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14338336, "index_size": 25999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 170115, "raw_average_key_size": 25, "raw_value_size": 14219884, "raw_average_value_size": 2148, "num_data_blocks": 1043, "num_entries": 6618, "num_filter_entries": 6618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770032909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.089317) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14381983 bytes
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.090499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.9 rd, 154.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 11.7 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 7138, records dropped: 520 output_compression: NoCompression
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.090520) EVENT_LOG_v1 {"time_micros": 1770032910090509, "job": 36, "event": "compaction_finished", "compaction_time_micros": 93280, "compaction_time_cpu_micros": 25230, "output_level": 6, "num_output_files": 1, "total_output_size": 14381983, "num_input_records": 7138, "num_output_records": 6618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032910091429, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032910093689, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:29.995555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.093888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.093897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.093899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.093901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:48:30.093903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:48:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:30.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:48:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:48:30 compute-0 ceph-mon[74676]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:30 compute-0 nova_compute[251290]: 2026-02-02 11:48:30.985 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:31 compute-0 ceph-mon[74676]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:31.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:32.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:33 compute-0 nova_compute[251290]: 2026-02-02 11:48:33.364 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:33.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:34 compute-0 ceph-mon[74676]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:35 compute-0 nova_compute[251290]: 2026-02-02 11:48:35.988 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:36 compute-0 ceph-mon[74676]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:48:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:48:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:37.219Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:48:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:37.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:48:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:38 compute-0 nova_compute[251290]: 2026-02-02 11:48:38.367 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:38 compute-0 ceph-mon[74676]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:38.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:39 compute-0 nova_compute[251290]: 2026-02-02 11:48:39.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:39 compute-0 ceph-mon[74676]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:39 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/562125375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:39.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.067 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.068 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.068 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.068 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.068 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:48:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:48:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779626291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.564 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.743 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.745 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.745 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.745 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:48:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3056260696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2268514334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2779626291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:40 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2838053723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.838 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.839 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.858 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:48:40 compute-0 nova_compute[251290]: 2026-02-02 11:48:40.987 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:48:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547616786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:41 compute-0 nova_compute[251290]: 2026-02-02 11:48:41.344 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:48:41 compute-0 nova_compute[251290]: 2026-02-02 11:48:41.349 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:48:41 compute-0 nova_compute[251290]: 2026-02-02 11:48:41.377 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:48:41 compute-0 nova_compute[251290]: 2026-02-02 11:48:41.379 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:48:41 compute-0 nova_compute[251290]: 2026-02-02 11:48:41.379 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:48:41 compute-0 ceph-mon[74676]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/547616786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:48:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:41.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:43 compute-0 nova_compute[251290]: 2026-02-02 11:48:43.475 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:43.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:44 compute-0 ceph-mon[74676]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2582033194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:48:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2582033194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:48:44 compute-0 nova_compute[251290]: 2026-02-02 11:48:44.379 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:44 compute-0 nova_compute[251290]: 2026-02-02 11:48:44.379 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:48:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:48:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:48:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:45.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:48:45 compute-0 nova_compute[251290]: 2026-02-02 11:48:45.988 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:46 compute-0 ceph-mon[74676]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:46] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:48:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:46] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:48:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:47.221Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:48:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:47.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:47.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:48.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:48 compute-0 ceph-mon[74676]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:48 compute-0 sudo[282750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:48:48 compute-0 sudo[282750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:48:48 compute-0 sudo[282750]: pam_unix(sudo:session): session closed for user root
Feb 02 11:48:48 compute-0 nova_compute[251290]: 2026-02-02 11:48:48.479 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:48.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:49 compute-0 nova_compute[251290]: 2026-02-02 11:48:49.021 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:48:49 compute-0 nova_compute[251290]: 2026-02-02 11:48:49.022 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:48:49 compute-0 nova_compute[251290]: 2026-02-02 11:48:49.022 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:48:49 compute-0 nova_compute[251290]: 2026-02-02 11:48:49.066 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:48:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:50 compute-0 ceph-mon[74676]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:50 compute-0 nova_compute[251290]: 2026-02-02 11:48:50.992 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:51.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:52 compute-0 ceph-mon[74676]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:53 compute-0 nova_compute[251290]: 2026-02-02 11:48:53.483 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:54.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:54 compute-0 ceph-mon[74676]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:55 compute-0 nova_compute[251290]: 2026-02-02 11:48:55.994 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:48:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:48:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:48:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:48:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:48:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:56.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:56 compute-0 ceph-mon[74676]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:48:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:56] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:48:57 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:48:56] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:48:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:57.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:48:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:57.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:48:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:48:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:48:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:48:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:48:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:48:58.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:48:58 compute-0 ceph-mon[74676]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:48:58 compute-0 nova_compute[251290]: 2026-02-02 11:48:58.487 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:48:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:58.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:48:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:48:58.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:48:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:48:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:48:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.004000115s ======
Feb 02 11:49:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:48:59.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000115s
Feb 02 11:49:00 compute-0 podman[282788]: 2026-02-02 11:49:00.024757224 +0000 UTC m=+0.056878352 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:49:00 compute-0 podman[282789]: 2026-02-02 11:49:00.056569996 +0000 UTC m=+0.085133192 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 02 11:49:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:00 compute-0 ceph-mon[74676]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:00 compute-0 nova_compute[251290]: 2026-02-02 11:49:00.995 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:01 compute-0 anacron[30961]: Job `cron.monthly' started
Feb 02 11:49:01 compute-0 anacron[30961]: Job `cron.monthly' terminated
Feb 02 11:49:01 compute-0 anacron[30961]: Normal exit (3 jobs run)
Feb 02 11:49:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:02 compute-0 ceph-mon[74676]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:03 compute-0 nova_compute[251290]: 2026-02-02 11:49:03.490 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:04.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:04 compute-0 ceph-mon[74676]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:05 compute-0 nova_compute[251290]: 2026-02-02 11:49:05.997 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:06 compute-0 ceph-mon[74676]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:08.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:08 compute-0 sudo[282842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:49:08 compute-0 sudo[282842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:08 compute-0 sudo[282842]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:08 compute-0 ceph-mon[74676]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:08 compute-0 nova_compute[251290]: 2026-02-02 11:49:08.494 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:08.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:49:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:08.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:10.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:10.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:10 compute-0 ceph-mon[74676]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:11 compute-0 nova_compute[251290]: 2026-02-02 11:49:10.999 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:12.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:12 compute-0 ceph-mon[74676]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Feb 02 11:49:13 compute-0 nova_compute[251290]: 2026-02-02 11:49:13.497 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:14 compute-0 ceph-mon[74676]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Feb 02 11:49:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:49:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:16 compute-0 nova_compute[251290]: 2026-02-02 11:49:16.000 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:16.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:16 compute-0 ceph-mon[74676]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:17.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:18.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:18 compute-0 nova_compute[251290]: 2026-02-02 11:49:18.501 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:18 compute-0 ceph-mon[74676]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:18.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:49:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:18.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:20.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:20 compute-0 sudo[282879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:49:20 compute-0 sudo[282879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:20 compute-0 sudo[282879]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:20 compute-0 sudo[282904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:49:20 compute-0 sudo[282904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:20.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:20 compute-0 ceph-mon[74676]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:20 compute-0 sudo[282904]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:49:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:49:20 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:49:20 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:49:20 compute-0 sudo[282962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:49:20 compute-0 sudo[282962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:20 compute-0 sudo[282962]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:20 compute-0 sudo[282987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:49:20 compute-0 sudo[282987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:21 compute-0 nova_compute[251290]: 2026-02-02 11:49:21.002 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.26356781 +0000 UTC m=+0.041357157 container create e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:49:21 compute-0 systemd[1]: Started libpod-conmon-e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed.scope.
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.244282947 +0000 UTC m=+0.022072314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.372140792 +0000 UTC m=+0.149930159 container init e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.378830174 +0000 UTC m=+0.156619521 container start e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.382450318 +0000 UTC m=+0.160239665 container attach e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:49:21 compute-0 epic_booth[283069]: 167 167
Feb 02 11:49:21 compute-0 systemd[1]: libpod-e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed.scope: Deactivated successfully.
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.387608746 +0000 UTC m=+0.165398093 container died e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 11:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61a9859cbaaf1cb7c4fe8613a6bd266df32a145f1d6d7e7669a594252fa25c7-merged.mount: Deactivated successfully.
Feb 02 11:49:21 compute-0 podman[283053]: 2026-02-02 11:49:21.429235409 +0000 UTC m=+0.207024756 container remove e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_booth, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:49:21 compute-0 systemd[1]: libpod-conmon-e383b8d00fa22b75b8b63d49331244505b1538dfbe2a973c2a0212e86fdc61ed.scope: Deactivated successfully.
Feb 02 11:49:21 compute-0 podman[283093]: 2026-02-02 11:49:21.563632283 +0000 UTC m=+0.039838113 container create 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:49:21 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:49:21 compute-0 systemd[1]: Started libpod-conmon-6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a.scope.
Feb 02 11:49:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:21 compute-0 podman[283093]: 2026-02-02 11:49:21.546885623 +0000 UTC m=+0.023091473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:21 compute-0 podman[283093]: 2026-02-02 11:49:21.658482523 +0000 UTC m=+0.134688383 container init 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:49:21 compute-0 podman[283093]: 2026-02-02 11:49:21.665578396 +0000 UTC m=+0.141784226 container start 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb 02 11:49:21 compute-0 podman[283093]: 2026-02-02 11:49:21.668603013 +0000 UTC m=+0.144808873 container attach 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:49:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:21 compute-0 musing_jackson[283109]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:49:21 compute-0 musing_jackson[283109]: --> All data devices are unavailable
Feb 02 11:49:22 compute-0 systemd[1]: libpod-6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a.scope: Deactivated successfully.
Feb 02 11:49:22 compute-0 podman[283093]: 2026-02-02 11:49:22.004537195 +0000 UTC m=+0.480743025 container died 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:49:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4c3d9cc0d459716d973ecd1ad31593a7ffea4cc130761162a89a08df39de1aa-merged.mount: Deactivated successfully.
Feb 02 11:49:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:22.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:22 compute-0 podman[283093]: 2026-02-02 11:49:22.050358488 +0000 UTC m=+0.526564318 container remove 6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb 02 11:49:22 compute-0 systemd[1]: libpod-conmon-6d1ca5071ef364253e50bf5ce6052a8bd70988ef312fa2fb8e12f52abace008a.scope: Deactivated successfully.
Feb 02 11:49:22 compute-0 sudo[282987]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:22 compute-0 sudo[283136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:49:22 compute-0 sudo[283136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:22 compute-0 sudo[283136]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:22 compute-0 sudo[283161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:49:22 compute-0 sudo[283161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:22.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:22 compute-0 ceph-mon[74676]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.615139112 +0000 UTC m=+0.045333991 container create d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:49:22 compute-0 systemd[1]: Started libpod-conmon-d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262.scope.
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.594680105 +0000 UTC m=+0.024875004 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:49:22.689 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:49:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:49:22.692 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:49:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:49:22.692 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:49:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.73504574 +0000 UTC m=+0.165240639 container init d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:49:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.741560237 +0000 UTC m=+0.171755116 container start d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.745544421 +0000 UTC m=+0.175739320 container attach d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:49:22 compute-0 pensive_swirles[283243]: 167 167
Feb 02 11:49:22 compute-0 systemd[1]: libpod-d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262.scope: Deactivated successfully.
Feb 02 11:49:22 compute-0 conmon[283243]: conmon d71a5717c0f85f50b3d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262.scope/container/memory.events
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.748389742 +0000 UTC m=+0.178584621 container died d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Feb 02 11:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-025f71e92a2eaf8ee9327651508f9c27743708fd2303d0bb42aae1765c7bd110-merged.mount: Deactivated successfully.
Feb 02 11:49:22 compute-0 podman[283227]: 2026-02-02 11:49:22.864984255 +0000 UTC m=+0.295179134 container remove d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:49:22 compute-0 systemd[1]: libpod-conmon-d71a5717c0f85f50b3d6eb772c4c7d4ad690da959ef4e32928ea447403c65262.scope: Deactivated successfully.
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.006989997 +0000 UTC m=+0.048762359 container create 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 11:49:23 compute-0 systemd[1]: Started libpod-conmon-9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c.scope.
Feb 02 11:49:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e95db82d90152ba8203becbf0336777f48340ee5b36615e355c304375690ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e95db82d90152ba8203becbf0336777f48340ee5b36615e355c304375690ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e95db82d90152ba8203becbf0336777f48340ee5b36615e355c304375690ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e95db82d90152ba8203becbf0336777f48340ee5b36615e355c304375690ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:22.984180123 +0000 UTC m=+0.025952505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.087732572 +0000 UTC m=+0.129504954 container init 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.093813216 +0000 UTC m=+0.135585578 container start 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.096680099 +0000 UTC m=+0.138452481 container attach 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:49:23 compute-0 adoring_galileo[283286]: {
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:     "1": [
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:         {
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "devices": [
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "/dev/loop3"
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             ],
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "lv_name": "ceph_lv0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "lv_size": "21470642176",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "name": "ceph_lv0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "tags": {
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.cluster_name": "ceph",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.crush_device_class": "",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.encrypted": "0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.osd_id": "1",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.type": "block",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.vdo": "0",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:                 "ceph.with_tpm": "0"
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             },
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "type": "block",
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:             "vg_name": "ceph_vg0"
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:         }
Feb 02 11:49:23 compute-0 adoring_galileo[283286]:     ]
Feb 02 11:49:23 compute-0 adoring_galileo[283286]: }
Feb 02 11:49:23 compute-0 systemd[1]: libpod-9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c.scope: Deactivated successfully.
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.385970753 +0000 UTC m=+0.427743135 container died 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e95db82d90152ba8203becbf0336777f48340ee5b36615e355c304375690ea-merged.mount: Deactivated successfully.
Feb 02 11:49:23 compute-0 podman[283269]: 2026-02-02 11:49:23.430571842 +0000 UTC m=+0.472344204 container remove 9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:49:23 compute-0 systemd[1]: libpod-conmon-9c116aa30b8249ed58f708919361309ee43dbd9cc0991476d260868580b6a42c.scope: Deactivated successfully.
Feb 02 11:49:23 compute-0 sudo[283161]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:23 compute-0 nova_compute[251290]: 2026-02-02 11:49:23.538 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:23 compute-0 sudo[283307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:49:23 compute-0 sudo[283307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:23 compute-0 sudo[283307]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:23 compute-0 sudo[283332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:49:23 compute-0 sudo[283332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.009031808 +0000 UTC m=+0.037106105 container create d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 02 11:49:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:24.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:24 compute-0 systemd[1]: Started libpod-conmon-d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614.scope.
Feb 02 11:49:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:23.993687628 +0000 UTC m=+0.021761945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.090342659 +0000 UTC m=+0.118416956 container init d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.097311789 +0000 UTC m=+0.125386086 container start d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.100685756 +0000 UTC m=+0.128760073 container attach d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:49:24 compute-0 optimistic_golick[283414]: 167 167
Feb 02 11:49:24 compute-0 systemd[1]: libpod-d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614.scope: Deactivated successfully.
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.102170478 +0000 UTC m=+0.130244785 container died d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c54412a378fc16623534a9306ffefb5228a4871faa8691583dcc05b9954b1f-merged.mount: Deactivated successfully.
Feb 02 11:49:24 compute-0 podman[283398]: 2026-02-02 11:49:24.138112089 +0000 UTC m=+0.166186386 container remove d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:49:24 compute-0 systemd[1]: libpod-conmon-d130cfaf87925a89778d17dfddef8a92f91e8e6766915bcb991e68fd90a57614.scope: Deactivated successfully.
Feb 02 11:49:24 compute-0 podman[283439]: 2026-02-02 11:49:24.264121702 +0000 UTC m=+0.035666714 container create 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:49:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:24 compute-0 systemd[1]: Started libpod-conmon-1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f.scope.
Feb 02 11:49:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4648389ac7fc447ecd029382cf1a2f95bda0b70eb97c19a48900f3053ddadf25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4648389ac7fc447ecd029382cf1a2f95bda0b70eb97c19a48900f3053ddadf25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4648389ac7fc447ecd029382cf1a2f95bda0b70eb97c19a48900f3053ddadf25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4648389ac7fc447ecd029382cf1a2f95bda0b70eb97c19a48900f3053ddadf25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:49:24 compute-0 podman[283439]: 2026-02-02 11:49:24.338305289 +0000 UTC m=+0.109850311 container init 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb 02 11:49:24 compute-0 podman[283439]: 2026-02-02 11:49:24.344482216 +0000 UTC m=+0.116027218 container start 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:49:24 compute-0 podman[283439]: 2026-02-02 11:49:24.248302968 +0000 UTC m=+0.019847990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:49:24 compute-0 podman[283439]: 2026-02-02 11:49:24.347912964 +0000 UTC m=+0.119457966 container attach 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:49:24 compute-0 ceph-mon[74676]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:24 compute-0 lvm[283531]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:49:24 compute-0 lvm[283531]: VG ceph_vg0 finished
Feb 02 11:49:25 compute-0 modest_chebyshev[283456]: {}
Feb 02 11:49:25 compute-0 systemd[1]: libpod-1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f.scope: Deactivated successfully.
Feb 02 11:49:25 compute-0 systemd[1]: libpod-1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f.scope: Consumed 1.031s CPU time.
Feb 02 11:49:25 compute-0 podman[283439]: 2026-02-02 11:49:25.082409513 +0000 UTC m=+0.853954525 container died 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4648389ac7fc447ecd029382cf1a2f95bda0b70eb97c19a48900f3053ddadf25-merged.mount: Deactivated successfully.
Feb 02 11:49:25 compute-0 podman[283439]: 2026-02-02 11:49:25.132630123 +0000 UTC m=+0.904175125 container remove 1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:49:25 compute-0 systemd[1]: libpod-conmon-1672703d0f0fae78552f2ce7b83b9d5a7cfd1a8be2bc869a36c57b3a3818681f.scope: Deactivated successfully.
Feb 02 11:49:25 compute-0 sudo[283332]: pam_unix(sudo:session): session closed for user root
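Note: the podman lines above trace the whole lifecycle of a short-lived cephadm probe container: attach at m=+0.119s, exit ("died") at m=+0.854s, removal at m=+0.904s, roughly 0.73s of runtime, with the container's only output being the empty JSON `{}` logged under its generated name modest_chebyshev. A minimal sketch, assuming journal text in exactly this format on stdin, that pairs attach/died events by container ID and reports each probe's runtime:

```python
import re
import sys
from collections import defaultdict

# Matches podman lifecycle events such as:
#   podman[283439]: 2026-02-02 11:49:24.347912964 +0000 UTC m=+0.119457966 container attach 1672703d...
EVENT = re.compile(
    r"podman\[\d+\]: \S+ \S+ \S+ \S+ m=\+(?P<offset>[\d.]+) "
    r"container (?P<event>attach|died|remove) (?P<cid>[0-9a-f]{64})"
)

def container_runtimes(lines):
    """Yield (short container id, seconds between attach and died)."""
    seen = defaultdict(dict)
    for line in lines:
        m = EVENT.search(line)
        if not m:
            continue
        seen[m["cid"]][m["event"]] = float(m["offset"])
        ev = seen[m["cid"]]
        if "attach" in ev and "died" in ev:
            yield m["cid"][:12], ev["died"] - ev["attach"]
            del seen[m["cid"]]

if __name__ == "__main__":
    for cid, secs in container_runtimes(sys.stdin):
        print(f"{cid}  ran for {secs:.3f}s")
```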
Feb 02 11:49:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:49:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:25 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:49:25 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:25 compute-0 sudo[283548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:49:25 compute-0 sudo[283548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:25 compute-0 sudo[283548]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:26 compute-0 nova_compute[251290]: 2026-02-02 11:49:26.004 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
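Note: the ganesha.nfsd entries above repeat as a fixed cycle roughly every five seconds in this excerpt: the server (re)enters a 90-second grace period, reloads client recovery info from the RADOS backend, finds no clients to reclaim (reclaim complete(0) clid count(0)), and the cluster-wide enforcing check returns ret=-45. A small sketch, again assuming journal text on stdin, that extracts the embedded ganesha timestamps and reports the cadence between successive "IN GRACE" restarts:

```python
import re
import sys
from datetime import datetime

# Matches the ganesha grace entries embedded in the journal, e.g.:
#   02/02/2026 11:49:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main]
#   nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
IN_GRACE = re.compile(
    r"(?P<ts>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) : .* "
    r"nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration (?P<dur>\d+)"
)

def grace_cadence(lines):
    """Yield seconds between successive grace-period (re)starts."""
    prev = None
    for line in lines:
        m = IN_GRACE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%m/%d/%Y %H:%M:%S")
        if prev is not None:
            yield (ts - prev).total_seconds()
        prev = ts

if __name__ == "__main__":
    for gap in grace_cadence(sys.stdin):
        print(f"grace restarted after {gap:.0f}s")
```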
Feb 02 11:49:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:26 compute-0 ceph-mon[74676]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:26 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:49:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:26.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:49:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:27.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
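Note: alertmanager cannot deliver the ceph-dashboard webhooks to compute-1 and compute-2 on port 8443; the errors alternate between connect-level "i/o timeout" and request-level "context deadline exceeded", which points at the receivers being unreachable rather than merely slow. A quick reachability probe, assuming the hostnames from the log resolve from wherever it runs:

```python
import socket

# Receivers taken from the alertmanager errors above; port 8443 is the
# dashboard endpoint these webhooks post /api/prometheus_receiver to.
RECEIVERS = [
    ("compute-1.ctlplane.example.com", 8443),
    ("compute-2.ctlplane.example.com", 8443),
]

def probe(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")
        return False

if __name__ == "__main__":
    for host, port in RECEIVERS:
        if probe(host, port):
            print(f"{host}:{port} reachable")
```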
Feb 02 11:49:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:28.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:28 compute-0 ceph-mon[74676]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:49:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:28.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:28 compute-0 sudo[283576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:49:28 compute-0 sudo[283576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:28 compute-0 sudo[283576]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:28 compute-0 nova_compute[251290]: 2026-02-02 11:49:28.543 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:28.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:49:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:28.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:49:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:28.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:49:29
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.nfs', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
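Note: this balancer pass ran in upmap mode with a 5% max-misplaced budget and prepared 0 of a possible 10 upmap changes, i.e. PG placement across the listed pools is already as even as the balancer can make it. A sketch of checking the same state from the CLI side, assuming an admin keyring is available; `ceph balancer status` is the standard query, though the exact output fields vary by release:

```python
import json
import subprocess

def balancer_status():
    """Ask the mgr balancer module for its current state."""
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    # Typical fields include "active" and "mode" (here "upmap",
    # matching the log lines above).
    print(json.dumps(balancer_status(), indent=2))
```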
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:49:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:29 compute-0 ceph-mon[74676]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:49:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
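Note: each pg_autoscaler line applies the same arithmetic: the pool's share of raw capacity (measured against the 64411926528-byte, i.e. ~60 GiB, root) times its bias times a cluster PG budget gives the raw pg target, which is then quantized to a power of two no lower than the pool's floor. The budget that reproduces every number above is 300 PGs, consistent with the default mon_target_pg_per_osd of 100 on what is presumably a 3-OSD cluster (an assumption; the OSD count is not in this excerpt). For example, cephfs.cephfs.meta: 5.087256625643029e-07 x 4.0 x 300 = 0.00061047..., matching the logged target, then raised to its floor of 16. A sketch that reproduces the logged values; the per-pool pg_num_min floors are assumptions read off the "quantized to" outputs:

```python
import math

PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

def pg_target(usage_ratio, bias, pg_num_min):
    """Reproduce the pg_autoscaler arithmetic seen in the mgr log lines."""
    raw = usage_ratio * bias * PG_BUDGET
    # Quantize to the next power of two, but never below the pool's floor.
    quantized = 2 ** math.ceil(math.log2(raw)) if raw >= 1 else 1
    return raw, max(quantized, pg_num_min)

# Usage ratios and biases copied from the log; floors are assumed
# (1 for .mgr, 16 for the cephfs metadata pool, 32 for the rest).
for pool, usage, bias, floor in [
    (".mgr", 7.185749983720779e-06, 1.0, 1),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 16),
    ("images", 0.000665858301588852, 1.0, 32),
]:
    raw, q = pg_target(usage, bias, floor)
    print(f"{pool}: raw target {raw:.6g}, quantized to {q}")
```

The printed raw targets (0.00215572, 0.000610471, 0.199757) match the logged "pg target" values digit for digit, which is what pins the 300-PG budget assumption.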
Feb 02 11:49:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:30.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:49:30 compute-0 podman[283603]: 2026-02-02 11:49:30.30279791 +0000 UTC m=+0.085262616 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 11:49:30 compute-0 podman[283604]: 2026-02-02 11:49:30.305462026 +0000 UTC m=+0.087741796 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
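Note: the two health_status events show the edpm_ansible-managed OVN containers passing their periodic healthchecks (health_status=healthy, health_failing_streak=0); the embedded config_data blob is the full container definition as rendered, not an error. A sketch for spot-checking the same state directly, assuming a podman release where the recorded health lives under .State.Health (older releases exposed it as .State.Healthcheck, so adjust the format string if the field comes back empty):

```python
import subprocess

CONTAINERS = ["ovn_controller", "ovn_metadata_agent"]  # names from the log

def health(name):
    """Return podman's recorded health status for a container."""
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    for name in CONTAINERS:
        print(f"{name}: {health(name)}")
```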
Feb 02 11:49:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Feb 02 11:49:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:31 compute-0 nova_compute[251290]: 2026-02-02 11:49:31.007 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:32 compute-0 ceph-mon[74676]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Feb 02 11:49:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:33 compute-0 nova_compute[251290]: 2026-02-02 11:49:33.544 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:34 compute-0 ceph-mon[74676]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:35 compute-0 sshd-session[283655]: Connection closed by 43.228.142.187 port 19554 [preauth]
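Note: the sshd line is unrelated to the deployment itself: an external address (43.228.142.187) opened a connection and dropped it before authenticating, the signature of routine internet-wide SSH scanning. A triage sketch that tallies preauth disconnects per source address from journal text on stdin, so repeat offenders stand out:

```python
import re
import sys
from collections import Counter

# Matches: sshd-session[283655]: Connection closed by 43.228.142.187 port 19554 [preauth]
PREAUTH = re.compile(
    r"sshd[\w-]*\[\d+\]: Connection closed by (?P<ip>[\d.]+) port \d+ \[preauth\]"
)

if __name__ == "__main__":
    hits = Counter(m["ip"] for m in map(PREAUTH.search, sys.stdin) if m)
    for ip, n in hits.most_common():
        print(f"{ip}: {n} preauth disconnects")
```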
Feb 02 11:49:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:36 compute-0 nova_compute[251290]: 2026-02-02 11:49:36.010 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:36 compute-0 ceph-mon[74676]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:49:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:36] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:49:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:36] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:49:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:38 compute-0 ceph-mon[74676]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:49:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:38 compute-0 nova_compute[251290]: 2026-02-02 11:49:38.547 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:38.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.053 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.054 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.054 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.054 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.054 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:49:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:40 compute-0 ceph-mon[74676]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:49:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:49:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761884415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.560 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
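Note: nova-compute's resource audit shells out to exactly the command logged above, `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, and the matching mon audit entries (entity='client.openstack' cmd=[{"prefix": "df" ...}]) show every compute node doing the same on its own periodic-task timer, which is why the mon log fills with df dispatches. A sketch of the same probe and the top-level capacity fields the RBD backend cares about; the exact stats key names are an assumption that matches recent Ceph releases:

```python
import json
import subprocess

CMD = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]  # identical to the command nova_compute logs above

def cluster_capacity():
    """Run ceph df and return (total_bytes, total_avail_bytes)."""
    out = subprocess.run(CMD, capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)["stats"]
    return stats["total_bytes"], stats["total_avail_bytes"]

if __name__ == "__main__":
    total, avail = cluster_capacity()
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```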
Feb 02 11:49:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.802 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.803 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4492MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.804 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:49:40 compute-0 nova_compute[251290]: 2026-02-02 11:49:40.804 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:49:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.010 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.167 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.167 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:49:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:49:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513116572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4179950250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2761884415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/644421512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/513116572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.355 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:49:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:49:41 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835719549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.860 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.868 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.892 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.894 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:49:41 compute-0 nova_compute[251290]: 2026-02-02 11:49:41.894 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
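Note: the inventory record two lines up is how the hypervisor view (8 vCPUs, 7679 MB RAM, 59 GB disk) becomes schedulable capacity in placement: for each resource class, placement can allocate up to (total - reserved) * allocation_ratio. With the ratios logged here that works out to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check, values copied from the log line:

```python
# Inventory copied from the nova_compute log line above.
INVENTORY = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

def schedulable(inv):
    """Placement's usable capacity: (total - reserved) * allocation_ratio."""
    return {
        rc: (spec["total"] - spec["reserved"]) * spec["allocation_ratio"]
        for rc, spec in inv.items()
    }

if __name__ == "__main__":
    for rc, cap in schedulable(INVENTORY).items():
        print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```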
Feb 02 11:49:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:42 compute-0 ceph-mon[74676]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:49:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/835719549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2854039519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:49:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:42 compute-0 nova_compute[251290]: 2026-02-02 11:49:42.895 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:43 compute-0 nova_compute[251290]: 2026-02-02 11:49:43.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:43 compute-0 nova_compute[251290]: 2026-02-02 11:49:43.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:43 compute-0 nova_compute[251290]: 2026-02-02 11:49:43.575 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:49:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150355809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:49:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:49:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150355809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:49:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:44.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:44 compute-0 ceph-mon[74676]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/150355809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:49:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/150355809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:49:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:49:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:45 compute-0 nova_compute[251290]: 2026-02-02 11:49:45.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.012 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.014 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 11:49:46 compute-0 nova_compute[251290]: 2026-02-02 11:49:46.040 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 11:49:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:49:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:49:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:46.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:46 compute-0 ceph-mon[74676]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:47 compute-0 nova_compute[251290]: 2026-02-02 11:49:47.040 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:47.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:48.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:48.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:48 compute-0 sudo[283714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:49:48 compute-0 sudo[283714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:49:48 compute-0 sudo[283714]: pam_unix(sudo:session): session closed for user root
Feb 02 11:49:48 compute-0 ceph-mon[74676]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:48 compute-0 nova_compute[251290]: 2026-02-02 11:49:48.577 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:48.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:49 compute-0 nova_compute[251290]: 2026-02-02 11:49:49.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:50.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:50 compute-0 ceph-mon[74676]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:51 compute-0 nova_compute[251290]: 2026-02-02 11:49:51.013 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:51 compute-0 nova_compute[251290]: 2026-02-02 11:49:51.034 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:51 compute-0 nova_compute[251290]: 2026-02-02 11:49:51.035 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:49:51 compute-0 nova_compute[251290]: 2026-02-02 11:49:51.035 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:49:51 compute-0 nova_compute[251290]: 2026-02-02 11:49:51.059 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:49:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:52 compute-0 nova_compute[251290]: 2026-02-02 11:49:52.038 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:49:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:52.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:49:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:52.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:52 compute-0 ceph-mon[74676]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:53 compute-0 nova_compute[251290]: 2026-02-02 11:49:53.580 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:54.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:54.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:54 compute-0 ceph-mon[74676]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:55 compute-0 nova_compute[251290]: 2026-02-02 11:49:55.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:55 compute-0 nova_compute[251290]: 2026-02-02 11:49:55.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 11:49:55 compute-0 ceph-mon[74676]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:49:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:49:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:49:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:49:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:49:56 compute-0 nova_compute[251290]: 2026-02-02 11:49:56.016 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:56.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:56.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:49:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:49:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:49:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:57.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:57 compute-0 ceph-mon[74676]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:49:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:49:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:49:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:49:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:49:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:49:58 compute-0 nova_compute[251290]: 2026-02-02 11:49:58.584 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:49:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:49:58.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:49:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:49:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:59 compute-0 nova_compute[251290]: 2026-02-02 11:49:59.800 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:49:59 compute-0 ceph-mon[74676]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:49:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:49:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:00 compute-0 ceph-mon[74676]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Feb 02 11:50:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:50:00 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7203 writes, 32K keys, 7203 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7203 writes, 7203 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1534 writes, 7073 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 11.96 MB, 0.02 MB/s
                                           Interval WAL: 1534 writes, 1534 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    116.9      0.44              0.11        18    0.025       0      0       0.0       0.0
                                             L6      1/0   13.72 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    134.6    114.7      1.92              0.42        17    0.113     94K   9474       0.0       0.0
                                            Sum      1/0   13.72 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    109.4    115.1      2.37              0.53        35    0.068     94K   9474       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    132.0    135.5      0.49              0.13         8    0.062     26K   2569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    134.6    114.7      1.92              0.42        17    0.113     94K   9474       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    117.7      0.44              0.11        17    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.4 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594e304b350#2 capacity: 304.00 MB usage: 23.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000275 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1464,22.29 MB,7.33332%) FilterBlock(36,278.30 KB,0.0893994%) IndexBlock(36,483.89 KB,0.155444%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 11:50:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:00 compute-0 ceph-mon[74676]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Feb 02 11:50:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:01 compute-0 nova_compute[251290]: 2026-02-02 11:50:01.017 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:01 compute-0 podman[283752]: 2026-02-02 11:50:01.25870429 +0000 UTC m=+0.050179940 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:01 compute-0 podman[283753]: 2026-02-02 11:50:01.275901303 +0000 UTC m=+0.063919094 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:50:01 compute-0 ceph-mon[74676]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:03 compute-0 nova_compute[251290]: 2026-02-02 11:50:03.586 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:03 compute-0 ceph-mon[74676]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:05 compute-0 ceph-mon[74676]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:06 compute-0 nova_compute[251290]: 2026-02-02 11:50:06.018 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:50:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:50:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:07.232Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:07.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:07 compute-0 ceph-mon[74676]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:50:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:50:08 compute-0 sudo[283805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:50:08 compute-0 sudo[283805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:08 compute-0 sudo[283805]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:08 compute-0 nova_compute[251290]: 2026-02-02 11:50:08.589 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:08.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:09 compute-0 ceph-mon[74676]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:11 compute-0 nova_compute[251290]: 2026-02-02 11:50:11.022 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:12 compute-0 ceph-mon[74676]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:13 compute-0 nova_compute[251290]: 2026-02-02 11:50:13.593 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:14 compute-0 ceph-mon[74676]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:50:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:16 compute-0 nova_compute[251290]: 2026-02-02 11:50:16.025 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:16 compute-0 ceph-mon[74676]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:50:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:50:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:17.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:18 compute-0 ceph-mon[74676]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:18.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:18 compute-0 nova_compute[251290]: 2026-02-02 11:50:18.596 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:18.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:20 compute-0 ceph-mon[74676]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:20.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:21 compute-0 nova_compute[251290]: 2026-02-02 11:50:21.026 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:22 compute-0 ceph-mon[74676]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:22.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:50:22.690 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:50:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:50:22.691 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:50:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:50:22.691 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:50:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:23 compute-0 nova_compute[251290]: 2026-02-02 11:50:23.600 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:24 compute-0 ceph-mon[74676]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:25 compute-0 sudo[283847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:50:25 compute-0 sudo[283847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:25 compute-0 sudo[283847]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:25 compute-0 sudo[283872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Feb 02 11:50:25 compute-0 sudo[283872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:26 compute-0 nova_compute[251290]: 2026-02-02 11:50:26.028 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:26.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:26 compute-0 ceph-mon[74676]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:26 compute-0 podman[283972]: 2026-02-02 11:50:26.145446832 +0000 UTC m=+0.062848069 container exec 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:26 compute-0 podman[283972]: 2026-02-02 11:50:26.24414032 +0000 UTC m=+0.161541527 container exec_died 88d564d338f4a17145e5981fd14ad5044c22dcd277531d968916d7d02f7cfddf (image=quay.io/ceph/ceph:v19, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:26.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
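
The radosgw triples above (starting new request / req done / beast access line) are anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102 on a roughly two-second cadence, i.e. load-balancer health checks rather than client traffic. A sketch that reproduces one probe; the target host and port 8080 are assumptions, since the access lines only record the client side:

    import http.client

    # Issue the same anonymous HEAD probe the beast frontend logs above.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, as in http_status=200
    conn.close()
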
Feb 02 11:50:26 compute-0 podman[284106]: 2026-02-02 11:50:26.739279499 +0000 UTC m=+0.055348383 container exec 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:26 compute-0 podman[284106]: 2026-02-02 11:50:26.774373308 +0000 UTC m=+0.090442172 container exec_died 4eb3867860a93f5a5626b825265fb9cd4d7588d4505fe05e0df7f1374506e9b9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:26] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:27 compute-0 podman[284181]: 2026-02-02 11:50:27.00531591 +0000 UTC m=+0.052158201 container exec d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:50:27 compute-0 podman[284181]: 2026-02-02 11:50:27.015172153 +0000 UTC m=+0.062014424 container exec_died d3ddad00564de16556687d2fd72c8c1c0a0fd92bbb83986771142660e53edb15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:27 compute-0 podman[284247]: 2026-02-02 11:50:27.214825215 +0000 UTC m=+0.055043784 container exec 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:50:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:27.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:27 compute-0 podman[284247]: 2026-02-02 11:50:27.249231144 +0000 UTC m=+0.089449723 container exec_died 4d4cd8cb5cc80e2d98799a547124c07a0e8cd32711da12567adef46ce7b1a4ed (image=quay.io/ceph/haproxy:2.3, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-haproxy-nfs-cephfs-compute-0-wzpgfa)
Feb 02 11:50:27 compute-0 podman[284314]: 2026-02-02 11:50:27.435639095 +0000 UTC m=+0.052235193 container exec 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20)
Feb 02 11:50:27 compute-0 podman[284314]: 2026-02-02 11:50:27.4465977 +0000 UTC m=+0.063193798 container exec_died 6c8e1f40841482a3afb98f16321fd109014e7a798210d0e76399fa54e99b2b25 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-keepalived-nfs-cephfs-compute-0-pstbyv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Feb 02 11:50:27 compute-0 podman[284379]: 2026-02-02 11:50:27.651538424 +0000 UTC m=+0.057265478 container exec ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:27 compute-0 podman[284379]: 2026-02-02 11:50:27.681212607 +0000 UTC m=+0.086939641 container exec_died ceee8b9b454f7085aef7aff104b5bb8c48b2052459fb52526899e1cf61ee2e4f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:27 compute-0 podman[284451]: 2026-02-02 11:50:27.877444241 +0000 UTC m=+0.057819634 container exec 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb 02 11:50:28 compute-0 podman[284451]: 2026-02-02 11:50:28.07425321 +0000 UTC m=+0.254628583 container exec_died 8c711cef8796fb166b304dd0f0dcf153e3f9c5b64e8619312eb486fbde4ef0e1 (image=quay.io/ceph/grafana:10.4.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
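
The podman exec/exec_died pairs above (mon, node-exporter, nfs, haproxy, keepalived, alertmanager, grafana) are cephadm's periodic per-daemon checks; each check is a short-lived podman exec, so the event log records one pair per container per pass. A sketch that replays the same churn from podman's event stream; the one-minute window is an arbitrary assumption:

    import json, subprocess

    # Replay recent exec_died events as JSON, one object per line.
    proc = subprocess.run(
        ["podman", "events", "--since", "1m", "--stream=false",
         "--filter", "event=exec_died", "--format", "json"],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        print(ev.get("Name"), ev.get("Status"))
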
Feb 02 11:50:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:50:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:28.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:50:28 compute-0 ceph-mon[74676]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:28.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:28 compute-0 podman[284562]: 2026-02-02 11:50:28.410281434 +0000 UTC m=+0.060551702 container exec 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:28 compute-0 podman[284562]: 2026-02-02 11:50:28.441053529 +0000 UTC m=+0.091323807 container exec_died 8db24ff12b15bd6efaaa4c919a69a90a4e048141245a13d28f3db65e3fb4a14e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1d33f80b-d6ca-501c-bac7-184379b89279-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 02 11:50:28 compute-0 sudo[283872]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:50:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:28 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:50:28 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:28 compute-0 nova_compute[251290]: 2026-02-02 11:50:28.604 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:28 compute-0 sudo[284606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:50:28 compute-0 sudo[284606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:28 compute-0 sudo[284607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:50:28 compute-0 sudo[284607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:28 compute-0 sudo[284606]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:28 compute-0 sudo[284607]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:28 compute-0 sudo[284656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:50:28 compute-0 sudo[284656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:28.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
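
Both dispatcher errors above show Alertmanager giving up after two attempts to POST to the ceph-dashboard webhook receivers on compute-1 and compute-2: "context deadline exceeded" means the HTTP request hit Alertmanager's notification timeout, so the dashboard API on those peers is unreachable or not answering on 8443. A sketch that probes one receiver the same way; the 5-second deadline is an assumption standing in for Alertmanager's timeout:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST",
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("status", resp.status)
    except OSError as exc:  # URLError and socket timeouts both land here
        print("unreachable:", exc)
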
Feb 02 11:50:29 compute-0 sudo[284656]: pam_unix(sudo:session): session closed for user root
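
The sudo bracket above is one iteration of the cephadm mgr module's host refresh over SSH: as ceph-admin it sudo-runs `which python3` and `/bin/true` as connection probes, then executes the copied cephadm binary with gather-facts, which prints a JSON inventory of the host. The same call can be made by hand; the binary path is copied from the sudo line above, and the two printed field names are assumptions about cephadm's facts output:

    import json, subprocess

    cephadm = ("/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688"
               "c1b1327a075745af2ee40ac466f0ac36")
    facts = json.loads(subprocess.run(
        ["sudo", cephadm, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True).stdout)
    print(facts["hostname"], facts["memory_total_kb"])
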
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 730 B/s rd, 0 op/s
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:50:29 compute-0 sudo[284713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:50:29 compute-0 sudo[284713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:29 compute-0 sudo[284713]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:29 compute-0 sudo[284738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:50:29 compute-0 sudo[284738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:50:29
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data']
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
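
The balancer pass above runs in upmap mode with a 5% max-misplaced budget, evaluates the twelve listed pools, and prepares 0 of a possible 10 upmap changes, i.e. PGs are already evenly mapped. A sketch querying the module for the same state; the printed field names are assumptions about the balancer's JSON status output:

    import json, subprocess

    st = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(st["active"], st["mode"], st["optimize_result"])
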
Feb 02 11:50:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:50:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:50:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
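
The handle_command/audit pairs above are the cephadm module driving the monitor over its command interface: config-key set for per-host metadata and the nfs/osd service specs, auth get for the client.admin and client.bootstrap-osd keys, config generate-minimal-conf for the conf it ships to hosts, an osd tree query filtered to destroyed OSDs, and osd blocklist ls. The config-key set entries carry no cmd= tail, consistent with the monitor redacting command bodies it treats as sensitive, and the bare from=/entity= repeats are the cluster-log copies of the same audit records. Any of these commands can be issued from Python through librados; a sketch assuming the usual admin conffile:

    import json
    import rados  # python3-rados binding

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same mon command the mgr dispatches in the audit lines above.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(ret, out.decode())
    finally:
        cluster.shutdown()
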
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.786306975 +0000 UTC m=+0.047779345 container create d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:29 compute-0 systemd[1]: Started libpod-conmon-d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec.scope.
Feb 02 11:50:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.764146067 +0000 UTC m=+0.025618457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.866034707 +0000 UTC m=+0.127507107 container init d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.873838862 +0000 UTC m=+0.135311232 container start d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.878539587 +0000 UTC m=+0.140011987 container attach d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:50:29 compute-0 practical_satoshi[284824]: 167 167
Feb 02 11:50:29 compute-0 systemd[1]: libpod-d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec.scope: Deactivated successfully.
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.879875756 +0000 UTC m=+0.141348126 container died d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c80767ab3a4f49098d0dd870974ad9098546dcb5dbdbd381635224722195946-merged.mount: Deactivated successfully.
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:29 compute-0 podman[284807]: 2026-02-02 11:50:29.928047181 +0000 UTC m=+0.189519551 container remove d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb 02 11:50:29 compute-0 systemd[1]: libpod-conmon-d0adb4874026a5bbc5294c608d4a686031724f26e93eb4d58c46a9e0898231ec.scope: Deactivated successfully.
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
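
Each pg_autoscaler pair above logs a pool's share of the ~64.4 GB root capacity, its bias, and a raw PG target that in this log equals usage x bias x 300 (an assumed 3 OSDs times the default mon_target_pg_per_osd of 100), which is then quantized to a power of two subject to a per-pool floor; every proposal lands at or within a factor of 3 of the current pg_num, so nothing is resized. The arithmetic checks out against the logged figures:

    # Reproduce the raw pg targets above. The x300 multiplier and the
    # per-pool floors are assumptions (3 OSDs x mon_target_pg_per_osd=100;
    # CephFS metadata pools default to a pg_num_min of 16, for example).
    def quantize(raw, floor):
        n = floor
        while n < raw:          # round up to the next power of two
            n *= 2
        return n

    pools = {  # name: (usage ratio, bias, assumed floor)
        ".mgr":               (7.185749983720779e-06, 1.0, 1),
        "images":             (0.000665858301588852, 1.0, 32),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 16),
    }
    for name, (usage, bias, floor) in pools.items():
        raw = usage * bias * 300
        print(f"{name}: pg target {raw} quantized to {quantize(raw, floor)}")
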
Feb 02 11:50:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:30.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:30 compute-0 podman[284849]: 2026-02-02 11:50:30.044375546 +0000 UTC m=+0.024489835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:30 compute-0 podman[284849]: 2026-02-02 11:50:30.277238753 +0000 UTC m=+0.257353012 container create f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:50:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
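
The rbd_support burst above is the module reloading its two schedule handlers, TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler; each handler scans the RBD pools (vms, volumes, backups, images), which is why every pool is logged twice. The schedules they load can be listed with the matching rbd subcommands:

    import subprocess

    # List trash-purge and mirror-snapshot schedules recursively.
    for cmd in (["rbd", "trash", "purge", "schedule", "ls", "-R"],
                ["rbd", "mirror", "snapshot", "schedule", "ls", "-R"]):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(" ".join(cmd), "->", out.stdout.strip() or "(none)")
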
Feb 02 11:50:30 compute-0 systemd[1]: Started libpod-conmon-f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49.scope.
Feb 02 11:50:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:30.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
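
The xfs messages above fire once per bind mount as podman remounts pieces of the host filesystem into the ceph-volume container; the kernel is noting that the (non-bigtime) XFS inode format stores timestamps only up to 0x7fffffff seconds after the epoch. Decoding that constant gives the familiar 2038 limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed epoch-seconds limit from the log.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
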
Feb 02 11:50:30 compute-0 podman[284849]: 2026-02-02 11:50:30.507547757 +0000 UTC m=+0.487662006 container init f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:50:30 compute-0 podman[284849]: 2026-02-02 11:50:30.51425695 +0000 UTC m=+0.494371209 container start f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:50:30 compute-0 podman[284849]: 2026-02-02 11:50:30.577609861 +0000 UTC m=+0.557724120 container attach f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:50:30 compute-0 ceph-mon[74676]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:30 compute-0 ceph-mon[74676]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:50:30 compute-0 ceph-mon[74676]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 730 B/s rd, 0 op/s
Feb 02 11:50:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:30 compute-0 dazzling_leavitt[284866]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:50:30 compute-0 dazzling_leavitt[284866]: --> All data devices are unavailable
Feb 02 11:50:30 compute-0 systemd[1]: libpod-f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49.scope: Deactivated successfully.
Feb 02 11:50:30 compute-0 podman[284881]: 2026-02-02 11:50:30.918622988 +0000 UTC m=+0.026923665 container died f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 11:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d27a2ce85bd5562237feef8311be5bce72eae48ddb257935c6d95d9fd2f76486-merged.mount: Deactivated successfully.
Feb 02 11:50:30 compute-0 podman[284881]: 2026-02-02 11:50:30.957016012 +0000 UTC m=+0.065316659 container remove f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:50:30 compute-0 systemd[1]: libpod-conmon-f38a52c71c94b744d49d0969e34d5c6a8ca43054c50b7444cffce768dba1aa49.scope: Deactivated successfully.
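
The dazzling_leavitt container above is the `ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0` run dispatched at 11:50:29: it sees "0 physical, 1 LVM" data devices and declares them all unavailable, typically because the LV is already prepared as an OSD, so the batch has nothing to create and exits; cephadm then reconciles with the `lvm list` call that follows at 11:50:31. A sketch reproducing that check by hand (the --image and --timeout flags from the logged command are trimmed for brevity):

    import json, subprocess

    cephadm = ("/var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688"
               "c1b1327a075745af2ee40ac466f0ac36")
    out = subprocess.run(
        ["sudo", cephadm, "ceph-volume",
         "--fsid", "1d33f80b-d6ca-501c-bac7-184379b89279",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    # One entry per OSD id; an LV listed here is "unavailable" to batch.
    for osd_id, devs in json.loads(out.stdout).items():
        print("osd", osd_id, [d.get("lv_path") for d in devs])
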
Feb 02 11:50:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:31 compute-0 sudo[284738]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:31 compute-0 nova_compute[251290]: 2026-02-02 11:50:31.031 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:31 compute-0 sudo[284897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:50:31 compute-0 sudo[284897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:31 compute-0 sudo[284897]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:31 compute-0 sudo[284922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:50:31 compute-0 sudo[284922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.518795718 +0000 UTC m=+0.041575287 container create 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:31 compute-0 systemd[1]: Started libpod-conmon-4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f.scope.
Feb 02 11:50:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.583136108 +0000 UTC m=+0.105915477 container init 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.590609423 +0000 UTC m=+0.113388772 container start 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.594206886 +0000 UTC m=+0.116986265 container attach 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:50:31 compute-0 pedantic_noyce[285006]: 167 167
Feb 02 11:50:31 compute-0 systemd[1]: libpod-4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f.scope: Deactivated successfully.
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.501211922 +0000 UTC m=+0.023991291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.599849159 +0000 UTC m=+0.122628508 container died 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a756a98130ab0ff0f53704b11e4da52c7d18ef0ad778a043d574871fe04e05f-merged.mount: Deactivated successfully.
Feb 02 11:50:31 compute-0 podman[285002]: 2026-02-02 11:50:31.63188752 +0000 UTC m=+0.068825040 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:31 compute-0 podman[284988]: 2026-02-02 11:50:31.637887933 +0000 UTC m=+0.160667322 container remove 4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_noyce, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:31 compute-0 systemd[1]: libpod-conmon-4efbbf358988eb361b1fd94234386dc576984f73999105336e1ddf8cf3e6072f.scope: Deactivated successfully.
Feb 02 11:50:31 compute-0 podman[285005]: 2026-02-02 11:50:31.660875044 +0000 UTC m=+0.098582646 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:31 compute-0 podman[285074]: 2026-02-02 11:50:31.77167981 +0000 UTC m=+0.041496254 container create 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:50:31 compute-0 systemd[1]: Started libpod-conmon-9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e.scope.
Feb 02 11:50:31 compute-0 podman[285074]: 2026-02-02 11:50:31.754836596 +0000 UTC m=+0.024653060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1712d29b1c37f103ea3e96f739bb0f099fb6b65bdd982166cc5a5ef23409126c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1712d29b1c37f103ea3e96f739bb0f099fb6b65bdd982166cc5a5ef23409126c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1712d29b1c37f103ea3e96f739bb0f099fb6b65bdd982166cc5a5ef23409126c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1712d29b1c37f103ea3e96f739bb0f099fb6b65bdd982166cc5a5ef23409126c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:31 compute-0 podman[285074]: 2026-02-02 11:50:31.870350768 +0000 UTC m=+0.140167242 container init 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:50:31 compute-0 podman[285074]: 2026-02-02 11:50:31.877926426 +0000 UTC m=+0.147742870 container start 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:50:31 compute-0 podman[285074]: 2026-02-02 11:50:31.881923981 +0000 UTC m=+0.151740445 container attach 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:50:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:50:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:32.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:50:32 compute-0 serene_noyce[285091]: {
Feb 02 11:50:32 compute-0 serene_noyce[285091]:     "1": [
Feb 02 11:50:32 compute-0 serene_noyce[285091]:         {
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "devices": [
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "/dev/loop3"
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             ],
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "lv_name": "ceph_lv0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "lv_size": "21470642176",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "name": "ceph_lv0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "tags": {
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.cluster_name": "ceph",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.crush_device_class": "",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.encrypted": "0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.osd_id": "1",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.type": "block",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.vdo": "0",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:                 "ceph.with_tpm": "0"
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             },
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "type": "block",
Feb 02 11:50:32 compute-0 serene_noyce[285091]:             "vg_name": "ceph_vg0"
Feb 02 11:50:32 compute-0 serene_noyce[285091]:         }
Feb 02 11:50:32 compute-0 serene_noyce[285091]:     ]
Feb 02 11:50:32 compute-0 serene_noyce[285091]: }
Feb 02 11:50:32 compute-0 systemd[1]: libpod-9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e.scope: Deactivated successfully.
Feb 02 11:50:32 compute-0 podman[285074]: 2026-02-02 11:50:32.178658724 +0000 UTC m=+0.448475158 container died 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1712d29b1c37f103ea3e96f739bb0f099fb6b65bdd982166cc5a5ef23409126c-merged.mount: Deactivated successfully.
Feb 02 11:50:32 compute-0 podman[285074]: 2026-02-02 11:50:32.222310809 +0000 UTC m=+0.492127253 container remove 9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_noyce, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:50:32 compute-0 systemd[1]: libpod-conmon-9e5dddbfa5b55b9b63c440bed08945afa8b111d1f38f1e7f756132478d273e5e.scope: Deactivated successfully.
Feb 02 11:50:32 compute-0 sudo[284922]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:32 compute-0 sudo[285113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:50:32 compute-0 sudo[285113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:32 compute-0 sudo[285113]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:32 compute-0 sudo[285138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:50:32 compute-0 sudo[285138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:32 compute-0 ceph-mon[74676]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.763914225 +0000 UTC m=+0.042014360 container create c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:50:32 compute-0 systemd[1]: Started libpod-conmon-c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d.scope.
Feb 02 11:50:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.835267817 +0000 UTC m=+0.113367982 container init c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.744683312 +0000 UTC m=+0.022783467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.842152405 +0000 UTC m=+0.120252540 container start c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.846276473 +0000 UTC m=+0.124376638 container attach c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb 02 11:50:32 compute-0 ecstatic_goldstine[285219]: 167 167
Feb 02 11:50:32 compute-0 systemd[1]: libpod-c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d.scope: Deactivated successfully.
Feb 02 11:50:32 compute-0 conmon[285219]: conmon c1bf2a5661bfb4e4c5c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d.scope/container/memory.events
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.849154636 +0000 UTC m=+0.127254781 container died c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-00f5639617c18a09d8504335fcaa90201a35bdda5684089f686e95629236b6d0-merged.mount: Deactivated successfully.
Feb 02 11:50:32 compute-0 podman[285202]: 2026-02-02 11:50:32.89105322 +0000 UTC m=+0.169153355 container remove c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:50:32 compute-0 systemd[1]: libpod-conmon-c1bf2a5661bfb4e4c5c850568f21f536c603857600e6e951584ff57431fc291d.scope: Deactivated successfully.
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.041016483 +0000 UTC m=+0.054900360 container create 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:50:33 compute-0 systemd[1]: Started libpod-conmon-7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c.scope.
Feb 02 11:50:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa125abcb1ecd5e5ae61aa30b30b868ec0b246aab34d35bfebb1c5ac4b979e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa125abcb1ecd5e5ae61aa30b30b868ec0b246aab34d35bfebb1c5ac4b979e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa125abcb1ecd5e5ae61aa30b30b868ec0b246aab34d35bfebb1c5ac4b979e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa125abcb1ecd5e5ae61aa30b30b868ec0b246aab34d35bfebb1c5ac4b979e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.013668656 +0000 UTC m=+0.027552343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.122385133 +0000 UTC m=+0.136268810 container init 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.130213878 +0000 UTC m=+0.144097545 container start 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.133914244 +0000 UTC m=+0.147797911 container attach 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:50:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:33 compute-0 nova_compute[251290]: 2026-02-02 11:50:33.607 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:33 compute-0 lvm[285333]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:50:33 compute-0 lvm[285333]: VG ceph_vg0 finished
Feb 02 11:50:33 compute-0 gifted_khayyam[285258]: {}
Feb 02 11:50:33 compute-0 systemd[1]: libpod-7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c.scope: Deactivated successfully.
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.812708905 +0000 UTC m=+0.826592572 container died 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa125abcb1ecd5e5ae61aa30b30b868ec0b246aab34d35bfebb1c5ac4b979e3-merged.mount: Deactivated successfully.
Feb 02 11:50:33 compute-0 podman[285243]: 2026-02-02 11:50:33.857472582 +0000 UTC m=+0.871356259 container remove 7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khayyam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:50:33 compute-0 systemd[1]: libpod-conmon-7c5142ca95af1cea29d1ad15180827956b3218ad8dba354073889b3b8df98c8c.scope: Deactivated successfully.
Feb 02 11:50:33 compute-0 sudo[285138]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:50:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:33 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:50:33 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:33 compute-0 sudo[285348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:50:34 compute-0 sudo[285348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:34 compute-0 sudo[285348]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:34.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:34 compute-0 ceph-mon[74676]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:34 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:50:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 730 B/s rd, 0 op/s
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:36 compute-0 nova_compute[251290]: 2026-02-02 11:50:36.034 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:36.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:36.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:36 compute-0 ceph-mon[74676]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 730 B/s rd, 0 op/s
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-crash-compute-0[80130]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Feb 02 11:50:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:50:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:37.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:50:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:37.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:37 compute-0 ceph-mon[74676]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Feb 02 11:50:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:38.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:38 compute-0 nova_compute[251290]: 2026-02-02 11:50:38.611 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:38.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:38.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:50:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:40.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:40 compute-0 ceph-mon[74676]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:50:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:41 compute-0 nova_compute[251290]: 2026-02-02 11:50:41.036 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:41 compute-0 sshd-session[285379]: Invalid user lighthouse from 80.94.92.186 port 40610
Feb 02 11:50:41 compute-0 sshd-session[285379]: Connection closed by invalid user lighthouse 80.94.92.186 port 40610 [preauth]
Feb 02 11:50:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.057 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.058 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.058 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.058 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.058 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:50:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:42 compute-0 ceph-mon[74676]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3645119042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3681216919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:50:42 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408589382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.516 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.657 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.658 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4453MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.658 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.658 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.807 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.807 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.855 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.875 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.876 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.897 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.923 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:50:42 compute-0 nova_compute[251290]: 2026-02-02 11:50:42.953 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:50:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1827656927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/408589382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1273032468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:43 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:50:43 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788938554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.413 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.419 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.440 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.441 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.442 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:50:43 compute-0 nova_compute[251290]: 2026-02-02 11:50:43.615 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:50:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971741352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:50:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:50:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971741352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:50:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:44.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:44 compute-0 ceph-mon[74676]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/788938554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:50:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1971741352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:50:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/1971741352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:50:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:44.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:44 compute-0 nova_compute[251290]: 2026-02-02 11:50:44.442 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:44 compute-0 nova_compute[251290]: 2026-02-02 11:50:44.443 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:50:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:45 compute-0 nova_compute[251290]: 2026-02-02 11:50:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:45 compute-0 nova_compute[251290]: 2026-02-02 11:50:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:46 compute-0 nova_compute[251290]: 2026-02-02 11:50:46.039 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:46.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:46 compute-0 ceph-mon[74676]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:46.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:46] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:50:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:46] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:50:47 compute-0 nova_compute[251290]: 2026-02-02 11:50:47.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:47.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:47.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:48 compute-0 nova_compute[251290]: 2026-02-02 11:50:48.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:48 compute-0 nova_compute[251290]: 2026-02-02 11:50:48.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:50:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:48.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:48 compute-0 ceph-mon[74676]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:48.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:48 compute-0 nova_compute[251290]: 2026-02-02 11:50:48.621 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:48 compute-0 sudo[285433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:50:48 compute-0 sudo[285433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:50:48 compute-0 sudo[285433]: pam_unix(sudo:session): session closed for user root
Feb 02 11:50:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:48.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:48.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:50:49 compute-0 nova_compute[251290]: 2026-02-02 11:50:49.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:50.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:50 compute-0 ceph-mon[74676]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:51 compute-0 nova_compute[251290]: 2026-02-02 11:50:51.041 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:52.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:52 compute-0 ceph-mon[74676]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:53 compute-0 nova_compute[251290]: 2026-02-02 11:50:53.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:50:53 compute-0 nova_compute[251290]: 2026-02-02 11:50:53.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:50:53 compute-0 nova_compute[251290]: 2026-02-02 11:50:53.021 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:50:53 compute-0 nova_compute[251290]: 2026-02-02 11:50:53.044 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:50:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:53 compute-0 nova_compute[251290]: 2026-02-02 11:50:53.624 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:54.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:54 compute-0 ceph-mon[74676]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:54.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:50:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:50:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:50:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:50:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:50:56 compute-0 nova_compute[251290]: 2026-02-02 11:50:56.042 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:56 compute-0 ceph-mon[74676]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:50:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:56.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:50:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:50:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:50:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:50:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:50:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:57.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:50:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:58 compute-0 ceph-mon[74676]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:50:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:50:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:50:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:50:58.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:50:58 compute-0 nova_compute[251290]: 2026-02-02 11:50:58.628 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:50:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:50:58.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:50:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:50:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:50:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:00.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:00.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:00 compute-0 ceph-mon[74676]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:01 compute-0 nova_compute[251290]: 2026-02-02 11:51:01.044 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:02 compute-0 podman[285473]: 2026-02-02 11:51:02.274160827 +0000 UTC m=+0.056952479 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 02 11:51:02 compute-0 podman[285474]: 2026-02-02 11:51:02.306894788 +0000 UTC m=+0.089899296 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 02 11:51:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:02.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:02 compute-0 ceph-mon[74676]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:03 compute-0 nova_compute[251290]: 2026-02-02 11:51:03.631 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:04.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:04 compute-0 ceph-mon[74676]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:06 compute-0 nova_compute[251290]: 2026-02-02 11:51:06.046 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:06 compute-0 ceph-mon[74676]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:06] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:06] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:08.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:08.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:08 compute-0 ceph-mon[74676]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:08 compute-0 nova_compute[251290]: 2026-02-02 11:51:08.635 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:08 compute-0 sudo[285524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:51:08 compute-0 sudo[285524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:08 compute-0 sudo[285524]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:08.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:51:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:08.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:08.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:10.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:10.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:10 compute-0 ceph-mon[74676]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:11 compute-0 nova_compute[251290]: 2026-02-02 11:51:11.048 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:12.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:12 compute-0 ceph-mon[74676]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:13 compute-0 nova_compute[251290]: 2026-02-02 11:51:13.639 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:14 compute-0 sshd-session[285552]: Invalid user ubuntu from 203.83.238.251 port 48370
Feb 02 11:51:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:14.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:14 compute-0 sshd-session[285552]: Received disconnect from 203.83.238.251 port 48370:11:  [preauth]
Feb 02 11:51:14 compute-0 sshd-session[285552]: Disconnected from invalid user ubuntu 203.83.238.251 port 48370 [preauth]
Feb 02 11:51:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:14.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:14 compute-0 ceph-mon[74676]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:51:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:16 compute-0 nova_compute[251290]: 2026-02-02 11:51:16.051 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:16.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:16 compute-0 ceph-mon[74676]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:17.242Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:17.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:18.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:18 compute-0 ceph-mon[74676]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:18 compute-0 nova_compute[251290]: 2026-02-02 11:51:18.641 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:18.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:18.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:51:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:20.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:20 compute-0 ceph-mon[74676]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:21 compute-0 nova_compute[251290]: 2026-02-02 11:51:21.053 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:51:21 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2859 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1587 writes, 5265 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 5.55 MB, 0.01 MB/s
                                           Interval WAL: 1587 writes, 663 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 11:51:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:22.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:22 compute-0 ceph-mon[74676]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:51:22.691 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:51:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:51:22.692 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:51:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:51:22.692 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:51:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:23 compute-0 nova_compute[251290]: 2026-02-02 11:51:23.644 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:24.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:24 compute-0 ceph-mon[74676]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:26 compute-0 nova_compute[251290]: 2026-02-02 11:51:26.055 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:26 compute-0 ceph-mon[74676]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:51:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:27.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:51:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:51:28 compute-0 nova_compute[251290]: 2026-02-02 11:51:28.647 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:28 compute-0 ceph-mon[74676]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:28 compute-0 sudo[285571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:51:28 compute-0 sudo[285571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:28 compute-0 sudo[285571]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:28.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:51:29
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.nfs', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.log', 'volumes']
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:51:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:51:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:51:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:51:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:51:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:30 compute-0 ceph-mon[74676]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:51:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:31 compute-0 nova_compute[251290]: 2026-02-02 11:51:31.057 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:32 compute-0 ceph-mon[74676]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:51:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:33 compute-0 podman[285601]: 2026-02-02 11:51:33.29320865 +0000 UTC m=+0.073903117 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:51:33 compute-0 podman[285602]: 2026-02-02 11:51:33.297469182 +0000 UTC m=+0.076482440 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 11:51:33 compute-0 nova_compute[251290]: 2026-02-02 11:51:33.649 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:34 compute-0 sudo[285646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:51:34 compute-0 sudo[285646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:34 compute-0 sudo[285646]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:34 compute-0 ceph-mon[74676]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:34 compute-0 sudo[285671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:51:34 compute-0 sudo[285671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:34 compute-0 sudo[285671]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:51:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:51:34 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:51:34 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:51:35 compute-0 sudo[285727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:51:35 compute-0 sudo[285727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:35 compute-0 sudo[285727]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:35 compute-0 sudo[285752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:51:35 compute-0 sudo[285752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:51:35 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.480620474 +0000 UTC m=+0.042970367 container create 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:51:35 compute-0 systemd[1]: Started libpod-conmon-7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca.scope.
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.45786107 +0000 UTC m=+0.020210993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.574895936 +0000 UTC m=+0.137245869 container init 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.583007349 +0000 UTC m=+0.145357252 container start 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 11:51:35 compute-0 beautiful_visvesvaraya[285834]: 167 167
Feb 02 11:51:35 compute-0 systemd[1]: libpod-7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca.scope: Deactivated successfully.
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.594382066 +0000 UTC m=+0.156731959 container attach 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.595069046 +0000 UTC m=+0.157418939 container died 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb9470c3ef16cee3f7c93413786b78546adb32cdb75cbd720e4fc02fa513ce1-merged.mount: Deactivated successfully.
Feb 02 11:51:35 compute-0 podman[285818]: 2026-02-02 11:51:35.640748219 +0000 UTC m=+0.203098112 container remove 7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_visvesvaraya, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 11:51:35 compute-0 systemd[1]: libpod-conmon-7fc1e1f5bddb1de635a2c9dbac25fecbfcac4408260d7891f54748dedcf3fcca.scope: Deactivated successfully.
Feb 02 11:51:35 compute-0 podman[285860]: 2026-02-02 11:51:35.773400654 +0000 UTC m=+0.042226896 container create 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 11:51:35 compute-0 systemd[1]: Started libpod-conmon-4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6.scope.
Feb 02 11:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:35 compute-0 podman[285860]: 2026-02-02 11:51:35.754674795 +0000 UTC m=+0.023501057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:35 compute-0 podman[285860]: 2026-02-02 11:51:35.856957917 +0000 UTC m=+0.125784179 container init 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:51:35 compute-0 podman[285860]: 2026-02-02 11:51:35.863829975 +0000 UTC m=+0.132656217 container start 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:51:35 compute-0 podman[285860]: 2026-02-02 11:51:35.867203842 +0000 UTC m=+0.136030094 container attach 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:51:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:36 compute-0 nova_compute[251290]: 2026-02-02 11:51:36.059 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:36 compute-0 epic_volhard[285876]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:51:36 compute-0 epic_volhard[285876]: --> All data devices are unavailable
Feb 02 11:51:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:36.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:36 compute-0 systemd[1]: libpod-4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6.scope: Deactivated successfully.
Feb 02 11:51:36 compute-0 podman[285860]: 2026-02-02 11:51:36.234658869 +0000 UTC m=+0.503485131 container died 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89fd0069aacfa4a6505cde0f75f905cb328d04925a20961796ce602d785b779-merged.mount: Deactivated successfully.
Feb 02 11:51:36 compute-0 podman[285860]: 2026-02-02 11:51:36.285653186 +0000 UTC m=+0.554479428 container remove 4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:51:36 compute-0 systemd[1]: libpod-conmon-4ed351a7438596090bac3e54719c67f3b780fd94b0b3d19035215fff8e8133c6.scope: Deactivated successfully.
Feb 02 11:51:36 compute-0 ceph-mon[74676]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:36 compute-0 sudo[285752]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:36 compute-0 sudo[285905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:51:36 compute-0 sudo[285905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:36 compute-0 sudo[285905]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:36 compute-0 sudo[285930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:51:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:36 compute-0 sudo[285930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.885180167 +0000 UTC m=+0.045037696 container create fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:51:36 compute-0 systemd[1]: Started libpod-conmon-fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5.scope.
Feb 02 11:51:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.866930632 +0000 UTC m=+0.026788191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.96838236 +0000 UTC m=+0.128239919 container init fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.974809265 +0000 UTC m=+0.134666794 container start fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:51:36 compute-0 nice_leavitt[286011]: 167 167
Feb 02 11:51:36 compute-0 systemd[1]: libpod-fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5.scope: Deactivated successfully.
Feb 02 11:51:36 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.981631011 +0000 UTC m=+0.141488560 container attach fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb 02 11:51:36 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:36.981404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:51:36 compute-0 podman[285995]: 2026-02-02 11:51:36.982136205 +0000 UTC m=+0.141993734 container died fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:51:36 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb 02 11:51:36 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033096981454, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2082, "num_deletes": 506, "total_data_size": 3538073, "memory_usage": 3597056, "flush_reason": "Manual Compaction"}
Feb 02 11:51:36 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb 02 11:51:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:51:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033097010021, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3452462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31922, "largest_seqno": 34003, "table_properties": {"data_size": 3443467, "index_size": 5105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 20419, "raw_average_key_size": 18, "raw_value_size": 3423484, "raw_average_value_size": 3081, "num_data_blocks": 222, "num_entries": 1111, "num_filter_entries": 1111, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032910, "oldest_key_time": 1770032910, "file_creation_time": 1770033096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 28706 microseconds, and 6576 cpu microseconds.
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.010102) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3452462 bytes OK
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.010134) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.013302) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.013335) EVENT_LOG_v1 {"time_micros": 1770033097013325, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.013367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3528381, prev total WAL file size 3528381, number of live WAL files 2.
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.014289) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323538' seq:72057594037927935, type:22 .. '6B7600353039' seq:0, type:0; will stop at (end)
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3371KB)], [68(13MB)]
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033097014337, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 17834445, "oldest_snapshot_seqno": -1}
Feb 02 11:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-953a9694b5c4bcb9f2e18d6ac7ca81d1975cfb92f9bc7be970f6b0839f0d59dc-merged.mount: Deactivated successfully.
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6702 keys, 16342768 bytes, temperature: kUnknown
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033097120172, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16342768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16296100, "index_size": 28828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 174781, "raw_average_key_size": 26, "raw_value_size": 16173380, "raw_average_value_size": 2413, "num_data_blocks": 1148, "num_entries": 6702, "num_filter_entries": 6702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770033097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.120478) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16342768 bytes
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.121888) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 154.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 13.7 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(9.9) write-amplify(4.7) OK, records in: 7729, records dropped: 1027 output_compression: NoCompression
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.121911) EVENT_LOG_v1 {"time_micros": 1770033097121900, "job": 38, "event": "compaction_finished", "compaction_time_micros": 105925, "compaction_time_cpu_micros": 30671, "output_level": 6, "num_output_files": 1, "total_output_size": 16342768, "num_input_records": 7729, "num_output_records": 6702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033097122408, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb 02 11:51:37 compute-0 podman[285995]: 2026-02-02 11:51:37.122454741 +0000 UTC m=+0.282312270 container remove fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033097124415, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.014176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.124451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.124458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.124460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.124462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:51:37.124464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:51:37 compute-0 systemd[1]: libpod-conmon-fd8350e46160d2c77faceb0667cfac14faa860a5f274c63a710850f272d404d5.scope: Deactivated successfully.
Feb 02 11:51:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:37.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.267356058 +0000 UTC m=+0.040664191 container create 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:51:37 compute-0 systemd[1]: Started libpod-conmon-9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e.scope.
Feb 02 11:51:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743cf1b87876416e86d1200bb851c94363793f50a8e7c9ef4f9e95cf2614eb90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743cf1b87876416e86d1200bb851c94363793f50a8e7c9ef4f9e95cf2614eb90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743cf1b87876416e86d1200bb851c94363793f50a8e7c9ef4f9e95cf2614eb90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743cf1b87876416e86d1200bb851c94363793f50a8e7c9ef4f9e95cf2614eb90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.250651307 +0000 UTC m=+0.023959460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.346834502 +0000 UTC m=+0.120142645 container init 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.353243797 +0000 UTC m=+0.126551930 container start 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.357085867 +0000 UTC m=+0.130394030 container attach 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]: {
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:     "1": [
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:         {
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "devices": [
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "/dev/loop3"
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             ],
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "lv_name": "ceph_lv0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "lv_size": "21470642176",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "name": "ceph_lv0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "tags": {
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.cluster_name": "ceph",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.crush_device_class": "",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.encrypted": "0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.osd_id": "1",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.type": "block",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.vdo": "0",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:                 "ceph.with_tpm": "0"
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             },
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "type": "block",
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:             "vg_name": "ceph_vg0"
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:         }
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]:     ]
Feb 02 11:51:37 compute-0 sad_goldwasser[286055]: }
Feb 02 11:51:37 compute-0 systemd[1]: libpod-9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e.scope: Deactivated successfully.
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.659135543 +0000 UTC m=+0.432443676 container died 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Feb 02 11:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-743cf1b87876416e86d1200bb851c94363793f50a8e7c9ef4f9e95cf2614eb90-merged.mount: Deactivated successfully.
Feb 02 11:51:37 compute-0 podman[286038]: 2026-02-02 11:51:37.701400839 +0000 UTC m=+0.474708982 container remove 9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:51:37 compute-0 systemd[1]: libpod-conmon-9cf4060720c46430e88ba059f07066d73859e0e743e5edc284c96c79db03d15e.scope: Deactivated successfully.
Feb 02 11:51:37 compute-0 sudo[285930]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:37 compute-0 sudo[286077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:51:37 compute-0 sudo[286077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:37 compute-0 sudo[286077]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:37 compute-0 sudo[286102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:51:37 compute-0 sudo[286102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:38.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.25085896 +0000 UTC m=+0.040232538 container create f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:51:38 compute-0 systemd[1]: Started libpod-conmon-f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154.scope.
Feb 02 11:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.231774011 +0000 UTC m=+0.021147589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:38 compute-0 ceph-mon[74676]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.334144095 +0000 UTC m=+0.123517693 container init f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.342143815 +0000 UTC m=+0.131517393 container start f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.346683656 +0000 UTC m=+0.136057464 container attach f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 11:51:38 compute-0 systemd[1]: libpod-f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154.scope: Deactivated successfully.
Feb 02 11:51:38 compute-0 cool_spence[286182]: 167 167
Feb 02 11:51:38 compute-0 conmon[286182]: conmon f4d9e21a3b1afefe7730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154.scope/container/memory.events
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.349791885 +0000 UTC m=+0.139165473 container died f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1fd49096742ca0f5ddde22852f7524fc4b1a30521bc7e9825c0aa55f6c6ad4d-merged.mount: Deactivated successfully.
Feb 02 11:51:38 compute-0 podman[286166]: 2026-02-02 11:51:38.392598626 +0000 UTC m=+0.181972204 container remove f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_spence, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb 02 11:51:38 compute-0 systemd[1]: libpod-conmon-f4d9e21a3b1afefe77306335bf26327bc186df581109c2a5cfe490dd9ab15154.scope: Deactivated successfully.
Feb 02 11:51:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:38.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:38 compute-0 podman[286207]: 2026-02-02 11:51:38.521358929 +0000 UTC m=+0.035717568 container create eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:51:38 compute-0 systemd[1]: Started libpod-conmon-eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed.scope.
Feb 02 11:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3e1bc33e6caded036454a7ff6331ae9c239b925c997d17b5541b63bda8d3cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3e1bc33e6caded036454a7ff6331ae9c239b925c997d17b5541b63bda8d3cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3e1bc33e6caded036454a7ff6331ae9c239b925c997d17b5541b63bda8d3cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3e1bc33e6caded036454a7ff6331ae9c239b925c997d17b5541b63bda8d3cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:51:38 compute-0 podman[286207]: 2026-02-02 11:51:38.596243223 +0000 UTC m=+0.110601902 container init eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:51:38 compute-0 podman[286207]: 2026-02-02 11:51:38.506906224 +0000 UTC m=+0.021264893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:51:38 compute-0 podman[286207]: 2026-02-02 11:51:38.604014956 +0000 UTC m=+0.118373605 container start eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:51:38 compute-0 podman[286207]: 2026-02-02 11:51:38.607840396 +0000 UTC m=+0.122199065 container attach eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 11:51:38 compute-0 nova_compute[251290]: 2026-02-02 11:51:38.682 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:38.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:39 compute-0 lvm[286300]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:51:39 compute-0 lvm[286300]: VG ceph_vg0 finished
Feb 02 11:51:39 compute-0 tender_wilson[286224]: {}
Feb 02 11:51:39 compute-0 systemd[1]: libpod-eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed.scope: Deactivated successfully.
Feb 02 11:51:39 compute-0 systemd[1]: libpod-eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed.scope: Consumed 1.059s CPU time.
Feb 02 11:51:39 compute-0 podman[286207]: 2026-02-02 11:51:39.299365873 +0000 UTC m=+0.813724522 container died eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb 02 11:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b3e1bc33e6caded036454a7ff6331ae9c239b925c997d17b5541b63bda8d3cd-merged.mount: Deactivated successfully.
Feb 02 11:51:39 compute-0 podman[286207]: 2026-02-02 11:51:39.340152216 +0000 UTC m=+0.854510855 container remove eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb 02 11:51:39 compute-0 systemd[1]: libpod-conmon-eef23a76b797df541585afd9de2b9f5ff66dab70a1208f21c185b93dbd5599ed.scope: Deactivated successfully.
Feb 02 11:51:39 compute-0 sudo[286102]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:51:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:39 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:51:39 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:39 compute-0 sudo[286314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:51:39 compute-0 sudo[286314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:39 compute-0 sudo[286314]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:40 compute-0 ceph-mon[74676]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:51:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:40.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:41 compute-0 nova_compute[251290]: 2026-02-02 11:51:41.061 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:42 compute-0 ceph-mon[74676]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3493891547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1032981488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:42.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:43 compute-0 nova_compute[251290]: 2026-02-02 11:51:43.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:43 compute-0 nova_compute[251290]: 2026-02-02 11:51:43.685 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.069 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.069 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.069 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.069 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.070 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:51:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:51:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3593536926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:51:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:51:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3593536926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:51:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:51:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:51:44 compute-0 sshd-session[286344]: Connection closed by authenticating user root 45.148.10.121 port 41136 [preauth]
Feb 02 11:51:44 compute-0 ceph-mon[74676]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2790464851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3593536926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:51:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3593536926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:51:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:51:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286879189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.570 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:51:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:51:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.721 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.722 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4438MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.723 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.723 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.814 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.814 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:51:44 compute-0 nova_compute[251290]: 2026-02-02 11:51:44.832 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:51:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:51:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097196651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:45 compute-0 nova_compute[251290]: 2026-02-02 11:51:45.357 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:51:45 compute-0 nova_compute[251290]: 2026-02-02 11:51:45.363 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:51:45 compute-0 nova_compute[251290]: 2026-02-02 11:51:45.379 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:51:45 compute-0 nova_compute[251290]: 2026-02-02 11:51:45.381 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:51:45 compute-0 nova_compute[251290]: 2026-02-02 11:51:45.381 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:51:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3286879189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1813304277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2097196651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:51:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:46 compute-0 nova_compute[251290]: 2026-02-02 11:51:46.062 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:46 compute-0 nova_compute[251290]: 2026-02-02 11:51:46.382 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:46 compute-0 ceph-mon[74676]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:51:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:51:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:51:47 compute-0 nova_compute[251290]: 2026-02-02 11:51:47.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:47.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:48 compute-0 nova_compute[251290]: 2026-02-02 11:51:48.014 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:48 compute-0 ceph-mon[74676]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:48 compute-0 nova_compute[251290]: 2026-02-02 11:51:48.688 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:48 compute-0 sshd-session[286394]: Received disconnect from 91.224.92.108 port 25184:11:  [preauth]
Feb 02 11:51:48 compute-0 sshd-session[286394]: Disconnected from authenticating user root 91.224.92.108 port 25184 [preauth]
Feb 02 11:51:48 compute-0 sudo[286396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:51:48 compute-0 sudo[286396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:51:48 compute-0 sudo[286396]: pam_unix(sudo:session): session closed for user root
Feb 02 11:51:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:48.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:49 compute-0 nova_compute[251290]: 2026-02-02 11:51:49.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:50 compute-0 nova_compute[251290]: 2026-02-02 11:51:50.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:50 compute-0 nova_compute[251290]: 2026-02-02 11:51:50.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:51:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:50 compute-0 ceph-mon[74676]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:51 compute-0 nova_compute[251290]: 2026-02-02 11:51:51.096 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:51:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:51:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:51:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:51:52 compute-0 ceph-mon[74676]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:53 compute-0 nova_compute[251290]: 2026-02-02 11:51:53.692 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:54 compute-0 nova_compute[251290]: 2026-02-02 11:51:54.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:54 compute-0 nova_compute[251290]: 2026-02-02 11:51:54.168 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:51:54 compute-0 nova_compute[251290]: 2026-02-02 11:51:54.168 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:51:54 compute-0 nova_compute[251290]: 2026-02-02 11:51:54.169 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:51:54 compute-0 nova_compute[251290]: 2026-02-02 11:51:54.228 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:51:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:51:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:51:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:54.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:54 compute-0 ceph-mon[74676]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:51:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:51:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:51:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:51:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:51:56 compute-0 nova_compute[251290]: 2026-02-02 11:51:56.099 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:56.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:56 compute-0 ceph-mon[74676]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:51:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:51:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:51:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:51:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:57.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:51:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:51:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:51:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:51:58.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:51:58 compute-0 ceph-mon[74676]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:51:58 compute-0 nova_compute[251290]: 2026-02-02 11:51:58.696 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:51:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:51:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:51:58.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:51:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:51:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:51:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:00.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:00 compute-0 ceph-mon[74676]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:01 compute-0 nova_compute[251290]: 2026-02-02 11:52:01.100 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:52:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:52:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:02.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:02 compute-0 ceph-mon[74676]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:03 compute-0 nova_compute[251290]: 2026-02-02 11:52:03.700 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:04 compute-0 podman[286437]: 2026-02-02 11:52:04.269498596 +0000 UTC m=+0.054917521 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 02 11:52:04 compute-0 podman[286438]: 2026-02-02 11:52:04.317622189 +0000 UTC m=+0.103040424 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 11:52:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:04.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:04 compute-0 ceph-mon[74676]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:06 compute-0 nova_compute[251290]: 2026-02-02 11:52:06.102 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:06 compute-0 nova_compute[251290]: 2026-02-02 11:52:06.160 251294 DEBUG oslo_concurrency.processutils [None req-9db34e87-6f2c-4976-8153-ea42fac0d298 5eb9ca9d081d4fe0954a72034c15983d 298a2ae7f4e04d87bebf3a1c7834ef26 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:52:06 compute-0 nova_compute[251290]: 2026-02-02 11:52:06.182 251294 DEBUG oslo_concurrency.processutils [None req-9db34e87-6f2c-4976-8153-ea42fac0d298 5eb9ca9d081d4fe0954a72034c15983d 298a2ae7f4e04d87bebf3a1c7834ef26 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:52:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:06.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:06.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:06 compute-0 ceph-mon[74676]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:06] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:52:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:06] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:52:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:07.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.767355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127767409, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 514, "num_deletes": 251, "total_data_size": 651794, "memory_usage": 662560, "flush_reason": "Manual Compaction"}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127772241, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 644775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34004, "largest_seqno": 34517, "table_properties": {"data_size": 641767, "index_size": 980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6913, "raw_average_key_size": 19, "raw_value_size": 635890, "raw_average_value_size": 1766, "num_data_blocks": 42, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033097, "oldest_key_time": 1770033097, "file_creation_time": 1770033127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 4934 microseconds, and 2838 cpu microseconds.
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.772287) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 644775 bytes OK
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.772318) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.775468) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.775487) EVENT_LOG_v1 {"time_micros": 1770033127775481, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.775514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 648845, prev total WAL file size 648845, number of live WAL files 2.
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.775929) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(629KB)], [71(15MB)]
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127775969, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16987543, "oldest_snapshot_seqno": -1}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6549 keys, 14790136 bytes, temperature: kUnknown
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127883784, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14790136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14745686, "index_size": 26994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172307, "raw_average_key_size": 26, "raw_value_size": 14626797, "raw_average_value_size": 2233, "num_data_blocks": 1066, "num_entries": 6549, "num_filter_entries": 6549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770033127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.884106) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14790136 bytes
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.885506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.4 rd, 137.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 15.6 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(49.3) write-amplify(22.9) OK, records in: 7062, records dropped: 513 output_compression: NoCompression
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.885527) EVENT_LOG_v1 {"time_micros": 1770033127885517, "job": 40, "event": "compaction_finished", "compaction_time_micros": 107910, "compaction_time_cpu_micros": 29386, "output_level": 6, "num_output_files": 1, "total_output_size": 14790136, "num_input_records": 7062, "num_output_records": 6549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127885764, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033127887984, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.775862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.888081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.888088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.888090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.888092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:07 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:52:07.888094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:52:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:08.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:08 compute-0 nova_compute[251290]: 2026-02-02 11:52:08.703 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:08 compute-0 ceph-mon[74676]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:52:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:52:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:08.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:52:08 compute-0 sudo[286489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:52:08 compute-0 sudo[286489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:09 compute-0 sudo[286489]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:09 compute-0 ceph-mon[74676]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:10.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:10.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:11 compute-0 nova_compute[251290]: 2026-02-02 11:52:11.104 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:11 compute-0 nova_compute[251290]: 2026-02-02 11:52:11.944 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:11.947 165304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:75:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:8d:e7:8c:ee:76'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 11:52:11 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:11.948 165304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 11:52:11 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:12 compute-0 ceph-mon[74676]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:12.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:12.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:13 compute-0 nova_compute[251290]: 2026-02-02 11:52:13.706 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:14 compute-0 ceph-mon[74676]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:14.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:52:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:16 compute-0 ceph-mon[74676]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:16 compute-0 nova_compute[251290]: 2026-02-02 11:52:16.106 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:16 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:52:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:52:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:17.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:18 compute-0 ceph-mon[74676]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:18.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:18 compute-0 nova_compute[251290]: 2026-02-02 11:52:18.709 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:18.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:19 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:19.950 165304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4587b97-1121-4d6d-b583-e59641a06362, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 11:52:20 compute-0 ceph-mon[74676]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:20.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:21 compute-0 nova_compute[251290]: 2026-02-02 11:52:21.108 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:21 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:22 compute-0 ceph-mon[74676]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:22.693 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:52:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:22.694 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:52:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:52:22.694 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:52:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:23 compute-0 nova_compute[251290]: 2026-02-02 11:52:23.712 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:24 compute-0 ceph-mon[74676]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:24.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:26 compute-0 nova_compute[251290]: 2026-02-02 11:52:26.110 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:26 compute-0 ceph-mon[74676]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:52:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:26.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:52:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:26 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:52:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:52:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:27.249Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:52:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:27.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:28 compute-0 ceph-mon[74676]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:28 compute-0 nova_compute[251290]: 2026-02-02 11:52:28.715 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:28.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:29 compute-0 sudo[286534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:52:29 compute-0 sudo[286534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:29 compute-0 sudo[286534]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:52:29
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', '.nfs', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta']
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:52:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:52:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:52:30 compute-0 ceph-mon[74676]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:52:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:52:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:30.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:52:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:31 compute-0 nova_compute[251290]: 2026-02-02 11:52:31.112 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:31 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:32 compute-0 ceph-mon[74676]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:32.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:33 compute-0 ceph-mgr[74969]: [dashboard INFO request] [192.168.122.100:43782] [POST] [200] [0.002s] [4.0B] [fad48b84-3e54-4071-a21c-097f136111bc] /api/prometheus_receiver
Feb 02 11:52:33 compute-0 nova_compute[251290]: 2026-02-02 11:52:33.718 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:34 compute-0 ceph-mon[74676]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:34.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:34.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:35 compute-0 podman[286566]: 2026-02-02 11:52:35.249161027 +0000 UTC m=+0.039278131 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 11:52:35 compute-0 podman[286567]: 2026-02-02 11:52:35.306805745 +0000 UTC m=+0.093605833 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 02 11:52:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:36 compute-0 nova_compute[251290]: 2026-02-02 11:52:36.114 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:36.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:36 compute-0 ceph-mon[74676]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:36 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:52:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:52:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:37.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:38.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:38 compute-0 ceph-mon[74676]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:52:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:38.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:38 compute-0 nova_compute[251290]: 2026-02-02 11:52:38.721 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:38.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:39 compute-0 sudo[286619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:52:39 compute-0 sudo[286619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:39 compute-0 sudo[286619]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:39 compute-0 sudo[286645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:52:39 compute-0 sudo[286645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:40 compute-0 sudo[286645]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:52:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:52:40 compute-0 sudo[286703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:52:40 compute-0 sudo[286703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:40 compute-0 sudo[286703]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:40 compute-0 ceph-mon[74676]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:52:40 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:52:40 compute-0 sudo[286728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:52:40 compute-0 sudo[286728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:40.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.829868263 +0000 UTC m=+0.038530349 container create b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:52:40 compute-0 systemd[1]: Started libpod-conmon-b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0.scope.
Feb 02 11:52:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.899947048 +0000 UTC m=+0.108609154 container init b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.907040772 +0000 UTC m=+0.115702858 container start b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.812945036 +0000 UTC m=+0.021607152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.910644776 +0000 UTC m=+0.119306862 container attach b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:52:40 compute-0 goofy_booth[286806]: 167 167
Feb 02 11:52:40 compute-0 systemd[1]: libpod-b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0.scope: Deactivated successfully.
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.914443405 +0000 UTC m=+0.123105491 container died b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d91e9bc2d2fccfa87bae2a226bebab6fdbd1d030196413855c01edb7c95de66-merged.mount: Deactivated successfully.
Feb 02 11:52:40 compute-0 podman[286790]: 2026-02-02 11:52:40.956082843 +0000 UTC m=+0.164744929 container remove b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:52:40 compute-0 systemd[1]: libpod-conmon-b679af677d1462a590d65cc4cc37c0606c2e7e72603c726630123c35d1a762b0.scope: Deactivated successfully.
Feb 02 11:52:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.082395965 +0000 UTC m=+0.043946745 container create 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb 02 11:52:41 compute-0 nova_compute[251290]: 2026-02-02 11:52:41.117 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:41 compute-0 systemd[1]: Started libpod-conmon-31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305.scope.
Feb 02 11:52:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.062532524 +0000 UTC m=+0.024083334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.164431844 +0000 UTC m=+0.125982644 container init 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.172494046 +0000 UTC m=+0.134044826 container start 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.177785648 +0000 UTC m=+0.139336698 container attach 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:52:41 compute-0 ceph-mon[74676]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:41 compute-0 zealous_bhabha[286850]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:52:41 compute-0 zealous_bhabha[286850]: --> All data devices are unavailable
Feb 02 11:52:41 compute-0 systemd[1]: libpod-31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305.scope: Deactivated successfully.
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.49428259 +0000 UTC m=+0.455833370 container died 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb 02 11:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-566d2463030f5e381677c26085f79938896e21fd37c2131d8bc32b9ab20d8763-merged.mount: Deactivated successfully.
Feb 02 11:52:41 compute-0 podman[286833]: 2026-02-02 11:52:41.538636175 +0000 UTC m=+0.500186955 container remove 31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhabha, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:52:41 compute-0 systemd[1]: libpod-conmon-31d5332b5274322dd9f0c862b57af16b968c8df017c60341952d8fa2d8f55305.scope: Deactivated successfully.
Feb 02 11:52:41 compute-0 sudo[286728]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:41 compute-0 sudo[286875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:52:41 compute-0 sudo[286875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:41 compute-0 sudo[286875]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:41 compute-0 sudo[286900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:52:41 compute-0 sudo[286900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:41 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.12059302 +0000 UTC m=+0.036359706 container create 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:52:42 compute-0 systemd[1]: Started libpod-conmon-17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880.scope.
Feb 02 11:52:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.191914002 +0000 UTC m=+0.107680708 container init 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.199365566 +0000 UTC m=+0.115132252 container start 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.106412823 +0000 UTC m=+0.022179529 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.204193165 +0000 UTC m=+0.119959881 container attach 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:52:42 compute-0 silly_dirac[286982]: 167 167
Feb 02 11:52:42 compute-0 systemd[1]: libpod-17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880.scope: Deactivated successfully.
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.20716485 +0000 UTC m=+0.122931536 container died 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb 02 11:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea5bb995751d7c075cce7222f8b1f116f3c540d295387df5fd9d1a5a6f84d21-merged.mount: Deactivated successfully.
Feb 02 11:52:42 compute-0 podman[286965]: 2026-02-02 11:52:42.24678988 +0000 UTC m=+0.162556566 container remove 17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dirac, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:52:42 compute-0 systemd[1]: libpod-conmon-17ab6c2f45f812fdca22bda03e65a17f305588af817bf774e17fe93be59dd880.scope: Deactivated successfully.
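Annotation: the silly_dirac container above is the short-lived helper pattern cephadm uses — pull-by-digest, create, init, start, attach, one line of output ("167 167", the ceph uid/gid), then died/remove within roughly 100 ms. A minimal sketch of that pattern follows; the stat probe is an assumption about what printed "167 167", since any short command produces the same event sequence in the journal.

    # Minimal sketch of the one-shot helper pattern above: podman run --rm a
    # pinned ceph image, capture a single line, let the container be removed
    # on exit. The stat call is an ASSUMPTION about what printed "167 167"
    # (the ceph uid/gid); the create/init/start/attach/died/remove events in
    # the journal look the same for any short command.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"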
Feb 02 11:52:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:42.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
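Annotation: the anonymous "HEAD / HTTP/1.0" requests arriving every ~2 s from 192.168.122.100 and .102 look like load-balancer health probes against radosgw. A minimal sketch of the same probe, assuming RGW listens on port 8080 (the port is not shown in the log; substitute the deployment's frontend port):

    # Minimal sketch: the anonymous health probe seen in the beast access
    # lines (HEAD / answered 200 with ~zero latency). Host copied from the
    # log; the PORT IS AN ASSUMPTION.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 while radosgw is up
    conn.close()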
Feb 02 11:52:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.386996242 +0000 UTC m=+0.043758410 container create 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb 02 11:52:42 compute-0 systemd[1]: Started libpod-conmon-689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211.scope.
Feb 02 11:52:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/569fc7e1c7caa79cb9a75dcf35fa216853d7a95793726a8fddbb018a219339d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/569fc7e1c7caa79cb9a75dcf35fa216853d7a95793726a8fddbb018a219339d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/569fc7e1c7caa79cb9a75dcf35fa216853d7a95793726a8fddbb018a219339d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/569fc7e1c7caa79cb9a75dcf35fa216853d7a95793726a8fddbb018a219339d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.461581427 +0000 UTC m=+0.118343615 container init 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.368754687 +0000 UTC m=+0.025516895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.469414102 +0000 UTC m=+0.126176280 container start 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.474463257 +0000 UTC m=+0.131225505 container attach 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Feb 02 11:52:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:42.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:42 compute-0 agitated_curie[287022]: {
Feb 02 11:52:42 compute-0 agitated_curie[287022]:     "1": [
Feb 02 11:52:42 compute-0 agitated_curie[287022]:         {
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "devices": [
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "/dev/loop3"
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             ],
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "lv_name": "ceph_lv0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "lv_size": "21470642176",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "name": "ceph_lv0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "tags": {
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.cluster_name": "ceph",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.crush_device_class": "",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.encrypted": "0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.osd_id": "1",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.type": "block",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.vdo": "0",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:                 "ceph.with_tpm": "0"
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             },
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "type": "block",
Feb 02 11:52:42 compute-0 agitated_curie[287022]:             "vg_name": "ceph_vg0"
Feb 02 11:52:42 compute-0 agitated_curie[287022]:         }
Feb 02 11:52:42 compute-0 agitated_curie[287022]:     ]
Feb 02 11:52:42 compute-0 agitated_curie[287022]: }
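Annotation: the JSON block just printed by agitated_curie is the `ceph-volume lvm list --format json` payload, keyed by OSD id. A minimal parsing sketch over a trimmed copy of that payload (keys and values as logged above; fields not needed here are omitted):

    # Minimal sketch: extract the fields cephadm cares about (OSD id,
    # backing LV, devices, fsids) from the payload logged above.
    import json

    # Trimmed copy of the logged payload; values verbatim from the log.
    payload = json.loads("""
    {
      "1": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "devices": ["/dev/loop3"],
          "tags": {
            "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
            "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
            "ceph.osd_id": "1",
            "ceph.type": "block"
          }
        }
      ]
    }
    """)

    for osd_id, lvs in payload.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")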
Feb 02 11:52:42 compute-0 systemd[1]: libpod-689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211.scope: Deactivated successfully.
Feb 02 11:52:42 compute-0 conmon[287022]: conmon 689156aad4e9ac43f77a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211.scope/container/memory.events
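Annotation: the conmon warning above is benign — it tries to read the cgroup v2 memory.events file for a container whose scope has already been torn down. A minimal sketch of the same read against live libpod scopes (path layout copied from the warning; illustrative only):

    # Minimal sketch: what conmon was trying to read. For an exited
    # container the scope directory is gone (hence the warning); for a
    # live scope this yields the oom/oom_kill counters.
    from pathlib import Path

    scope_root = Path("/sys/fs/cgroup/machine.slice")  # parent used by libpod above
    for events in scope_root.glob("libpod-*.scope/container/memory.events"):
        counters = dict(line.split() for line in events.read_text().splitlines())
        print(events.parent.parent.name, "oom_kill =", counters.get("oom_kill", "0"))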
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.770718697 +0000 UTC m=+0.427480865 container died 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-569fc7e1c7caa79cb9a75dcf35fa216853d7a95793726a8fddbb018a219339d3-merged.mount: Deactivated successfully.
Feb 02 11:52:42 compute-0 podman[287005]: 2026-02-02 11:52:42.812613682 +0000 UTC m=+0.469375850 container remove 689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:52:42 compute-0 systemd[1]: libpod-conmon-689156aad4e9ac43f77a62ff2a1436416fd07bb7777efef0dc3d6e0a2e853211.scope: Deactivated successfully.
Feb 02 11:52:42 compute-0 sudo[286900]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:42 compute-0 sudo[287043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:52:42 compute-0 sudo[287043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:42 compute-0 sudo[287043]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:42 compute-0 sudo[287068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:52:42 compute-0 sudo[287068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.344687343 +0000 UTC m=+0.035645546 container create 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:52:43 compute-0 systemd[1]: Started libpod-conmon-8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae.scope.
Feb 02 11:52:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.32893285 +0000 UTC m=+0.019891073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:43 compute-0 ceph-mon[74676]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:43 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1677397946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.428494763 +0000 UTC m=+0.119452986 container init 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.436343979 +0000 UTC m=+0.127302202 container start 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.440298342 +0000 UTC m=+0.131256635 container attach 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 11:52:43 compute-0 jovial_goldberg[287151]: 167 167
Feb 02 11:52:43 compute-0 systemd[1]: libpod-8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae.scope: Deactivated successfully.
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.443565966 +0000 UTC m=+0.134524169 container died 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d9e521aee532acae9bda0c46d5d3b7a7a3f763dd4a3cf3dd908d677209a258-merged.mount: Deactivated successfully.
Feb 02 11:52:43 compute-0 podman[287135]: 2026-02-02 11:52:43.47951287 +0000 UTC m=+0.170471073 container remove 8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:52:43 compute-0 systemd[1]: libpod-conmon-8d1120b9b5106f6e90ef79892c947012a9c127e94f9e53a722095331a54889ae.scope: Deactivated successfully.
Feb 02 11:52:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:43.562Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
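Annotation: Alertmanager is failing to deliver to the Ceph dashboard webhook receivers on compute-1 and compute-2 (context deadline exceeded, i.e. a connect/response timeout). A minimal probe of one receiver, URL copied verbatim from the error; a timeout here reproduces the delivery failure:

    # Minimal sketch: probe the dashboard webhook receiver Alertmanager
    # could not reach. URL copied from the error message above (note it is
    # plain http on 8443 as logged).
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        urllib.request.urlopen(url, data=b"{}", timeout=5)
    except OSError as exc:  # URLError and timeouts both subclass OSError
        print("receiver unreachable:", exc)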
Feb 02 11:52:43 compute-0 podman[287176]: 2026-02-02 11:52:43.61651848 +0000 UTC m=+0.044539342 container create 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:52:43 compute-0 systemd[1]: Started libpod-conmon-3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b.scope.
Feb 02 11:52:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573abc5bfedb9da2af6889db164bfa8cb479defc4fdbfacf5242c4ccbe2632b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573abc5bfedb9da2af6889db164bfa8cb479defc4fdbfacf5242c4ccbe2632b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573abc5bfedb9da2af6889db164bfa8cb479defc4fdbfacf5242c4ccbe2632b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573abc5bfedb9da2af6889db164bfa8cb479defc4fdbfacf5242c4ccbe2632b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:52:43 compute-0 podman[287176]: 2026-02-02 11:52:43.598999136 +0000 UTC m=+0.027020028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:52:43 compute-0 podman[287176]: 2026-02-02 11:52:43.695939934 +0000 UTC m=+0.123960826 container init 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:52:43 compute-0 podman[287176]: 2026-02-02 11:52:43.700738932 +0000 UTC m=+0.128759794 container start 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:52:43 compute-0 podman[287176]: 2026-02-02 11:52:43.705112228 +0000 UTC m=+0.133133120 container attach 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:52:43 compute-0 nova_compute[251290]: 2026-02-02 11:52:43.724 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.021 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.052 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2866951731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.053 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2866951731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.054 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.055 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.056 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:52:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:44.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:44 compute-0 lvm[287288]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:52:44 compute-0 lvm[287288]: VG ceph_vg0 finished
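Annotation: the two lvm lines report event-based autoactivation — /dev/loop3 came online and completed VG ceph_vg0. A minimal sketch confirming the same PV-to-VG mapping via lvm2's JSON reporting mode:

    # Minimal sketch: confirm what the autoactivation messages report,
    # i.e. that /dev/loop3 is the PV behind ceph_vg0.
    import json, subprocess

    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True).stdout)

    for pv in report["report"][0]["pv"]:
        if pv["vg_name"] == "ceph_vg0":
            print(pv["pv_name"], "->", pv["vg_name"])  # /dev/loop3 -> ceph_vg0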
Feb 02 11:52:44 compute-0 elated_darwin[287193]: {}
Feb 02 11:52:44 compute-0 systemd[1]: libpod-3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b.scope: Deactivated successfully.
Feb 02 11:52:44 compute-0 systemd[1]: libpod-3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b.scope: Consumed 1.060s CPU time.
Feb 02 11:52:44 compute-0 podman[287176]: 2026-02-02 11:52:44.416785944 +0000 UTC m=+0.844806806 container died 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2866951731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:52:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2866951731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:52:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2250158771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-573abc5bfedb9da2af6889db164bfa8cb479defc4fdbfacf5242c4ccbe2632b8-merged.mount: Deactivated successfully.
Feb 02 11:52:44 compute-0 podman[287176]: 2026-02-02 11:52:44.467877443 +0000 UTC m=+0.895898315 container remove 3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_darwin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:52:44 compute-0 systemd[1]: libpod-conmon-3ff7fc0083652586924b9563f998ed38e1c3c80bd9e4d1de42ca53b9117c5a0b.scope: Deactivated successfully.
Feb 02 11:52:44 compute-0 sudo[287068]: pam_unix(sudo:session): session closed for user root
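Annotation: that closes the second cephadm inventory pass — `lvm list` reported the one LVM-backed OSD, while `raw list` printed "{}" (at the elated_darwin line above) because no OSD on this host was prepared in raw, whole-device mode. A minimal sketch of both calls, assuming a `cephadm` binary on PATH rather than the copied per-cluster script used in the sudo command lines:

    # Minimal sketch: the two ceph-volume inventory calls cephadm just ran.
    # Uses `cephadm` from PATH (an ASSUMPTION; the log invokes a copied
    # script under /var/lib/ceph/<fsid>/). Everything after "--" is passed
    # through to ceph-volume inside a one-shot container.
    import subprocess

    FSID = "1d33f80b-d6ca-501c-bac7-184379b89279"

    def ceph_volume(*args):
        return subprocess.run(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--",
             *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout

    print(ceph_volume("lvm", "list")[:80])  # the JSON payload seen above
    print(ceph_volume("raw", "list"))       # "{}" - no raw-mode OSDs here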
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/254702435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.561 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:52:44 compute-0 sudo[287305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:52:44 compute-0 sudo[287305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:52:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:44 compute-0 sudo[287305]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:44.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.779 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.780 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4405MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.781 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:52:44 compute-0 nova_compute[251290]: 2026-02-02 11:52:44.781 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.031 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.031 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.053 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Feb 02 11:52:45 compute-0 radosgw[89826]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
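Annotation: the RGWReshardLock INFO lines are harmless contention — each RGW instance walks the reshard queue's shards, and this one skips the even-numbered shards because another RGW already holds their locks. A minimal sketch inspecting the queue those locks protect:

    # Minimal sketch: list the bucket reshard queue whose per-shard locks
    # are reported above.
    import json, subprocess

    queue = json.loads(subprocess.run(
        ["radosgw-admin", "reshard", "list"],
        capture_output=True, text=True, check=True).stdout)
    print(f"{len(queue)} bucket(s) queued for resharding")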
Feb 02 11:52:45 compute-0 ceph-mon[74676]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:52:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/254702435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:52:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:52:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499464856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.551 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.558 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.573 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.575 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:52:45 compute-0 nova_compute[251290]: 2026-02-02 11:52:45.575 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
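Annotation: the inventory dict nova just reported to placement defines schedulable capacity as (total - reserved) * allocation_ratio per resource class. A worked check against the exact values logged above:

    # Minimal sketch: schedulable capacity implied by the inventory dict
    # in the "Inventory has not changed" line, using placement's
    # (total - reserved) * allocation_ratio formula.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2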
Feb 02 11:52:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:46 compute-0 nova_compute[251290]: 2026-02-02 11:52:46.119 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Feb 02 11:52:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3499464856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3995170386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:52:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:52:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:47.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:47 compute-0 ceph-mon[74676]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Feb 02 11:52:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3058801151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:52:47 compute-0 nova_compute[251290]: 2026-02-02 11:52:47.574 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:48 compute-0 nova_compute[251290]: 2026-02-02 11:52:48.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 11:52:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:48 compute-0 nova_compute[251290]: 2026-02-02 11:52:48.727 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:48.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:49 compute-0 nova_compute[251290]: 2026-02-02 11:52:49.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:49 compute-0 nova_compute[251290]: 2026-02-02 11:52:49.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:49 compute-0 sudo[287357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:52:49 compute-0 sudo[287357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:52:49 compute-0 sudo[287357]: pam_unix(sudo:session): session closed for user root
Feb 02 11:52:49 compute-0 ceph-mon[74676]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 11:52:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 11:52:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:52:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:52:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:51 compute-0 nova_compute[251290]: 2026-02-02 11:52:51.122 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:51 compute-0 ceph-mon[74676]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 11:52:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:52 compute-0 nova_compute[251290]: 2026-02-02 11:52:52.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:52 compute-0 nova_compute[251290]: 2026-02-02 11:52:52.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:52:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 151 op/s
Feb 02 11:52:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:53 compute-0 ceph-mon[74676]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 151 op/s
Feb 02 11:52:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:53.565Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:52:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:53.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:53 compute-0 nova_compute[251290]: 2026-02-02 11:52:53.730 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 150 op/s
Feb 02 11:52:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:54.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:55 compute-0 ceph-mon[74676]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 150 op/s
Feb 02 11:52:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:52:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:52:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:52:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:52:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:52:56 compute-0 nova_compute[251290]: 2026-02-02 11:52:56.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:52:56 compute-0 nova_compute[251290]: 2026-02-02 11:52:56.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:52:56 compute-0 nova_compute[251290]: 2026-02-02 11:52:56.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:52:56 compute-0 nova_compute[251290]: 2026-02-02 11:52:56.042 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:52:56 compute-0 nova_compute[251290]: 2026-02-02 11:52:56.124 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 151 op/s
Feb 02 11:52:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:52:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:52:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:52:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:52:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:52:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:52:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:57.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:57 compute-0 ceph-mon[74676]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 151 op/s
Feb 02 11:52:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:52:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:52:58.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:52:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:52:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:52:58.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:52:58 compute-0 nova_compute[251290]: 2026-02-02 11:52:58.733 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:52:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:52:58.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:52:59 compute-0 ceph-mon[74676]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:52:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:52:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:52:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:53:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:00.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:01 compute-0 nova_compute[251290]: 2026-02-02 11:53:01.125 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:01 compute-0 ceph-mon[74676]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:53:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:53:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:02.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:02.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:03.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:03 compute-0 ceph-mon[74676]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Feb 02 11:53:03 compute-0 nova_compute[251290]: 2026-02-02 11:53:03.737 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:04.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:05 compute-0 ceph-mon[74676]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:06 compute-0 nova_compute[251290]: 2026-02-02 11:53:06.127 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:06 compute-0 podman[287399]: 2026-02-02 11:53:06.272067694 +0000 UTC m=+0.060882852 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 11:53:06 compute-0 podman[287400]: 2026-02-02 11:53:06.296669771 +0000 UTC m=+0.084827280 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 11:53:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:06.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:53:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:53:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:07.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:07 compute-0 ceph-mon[74676]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:08 compute-0 nova_compute[251290]: 2026-02-02 11:53:08.740 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:08.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:09 compute-0 sudo[287443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:53:09 compute-0 sudo[287443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:09 compute-0 sudo[287443]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:09 compute-0 ceph-mon[74676]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:10.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:10.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:11 compute-0 nova_compute[251290]: 2026-02-02 11:53:11.131 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:11 compute-0 ceph-mon[74676]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.009458) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192009505, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 251, "total_data_size": 1221015, "memory_usage": 1241448, "flush_reason": "Manual Compaction"}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192014139, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 783465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34518, "largest_seqno": 35309, "table_properties": {"data_size": 780051, "index_size": 1194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9181, "raw_average_key_size": 20, "raw_value_size": 772804, "raw_average_value_size": 1752, "num_data_blocks": 52, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033128, "oldest_key_time": 1770033128, "file_creation_time": 1770033192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4731 microseconds, and 2433 cpu microseconds.
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.014192) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 783465 bytes OK
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.014214) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.017862) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.017898) EVENT_LOG_v1 {"time_micros": 1770033192017889, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.017925) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1217068, prev total WAL file size 1217068, number of live WAL files 2.
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.018641) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(765KB)], [74(14MB)]
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192018724, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15573601, "oldest_snapshot_seqno": -1}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6500 keys, 11898722 bytes, temperature: kUnknown
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192128076, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11898722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11858624, "index_size": 22772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 171490, "raw_average_key_size": 26, "raw_value_size": 11744621, "raw_average_value_size": 1806, "num_data_blocks": 891, "num_entries": 6500, "num_filter_entries": 6500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770033192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.128374) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11898722 bytes
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.129611) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.3 rd, 108.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 14.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(35.1) write-amplify(15.2) OK, records in: 6990, records dropped: 490 output_compression: NoCompression
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.129696) EVENT_LOG_v1 {"time_micros": 1770033192129623, "job": 42, "event": "compaction_finished", "compaction_time_micros": 109404, "compaction_time_cpu_micros": 22273, "output_level": 6, "num_output_files": 1, "total_output_size": 11898722, "num_input_records": 6990, "num_output_records": 6500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192129956, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033192132242, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.018518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.132280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.132287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.132288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.132289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:53:12.132291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:53:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:13 compute-0 ceph-mon[74676]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:13.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:13 compute-0 nova_compute[251290]: 2026-02-02 11:53:13.743 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:53:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:15 compute-0 ceph-mon[74676]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:16 compute-0 nova_compute[251290]: 2026-02-02 11:53:16.132 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:53:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:53:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:17.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:17 compute-0 ceph-mon[74676]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:18.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:18 compute-0 nova_compute[251290]: 2026-02-02 11:53:18.744 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:18.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:53:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:18.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:53:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:18.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:53:19 compute-0 ceph-mon[74676]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:21 compute-0 nova_compute[251290]: 2026-02-02 11:53:21.133 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:21 compute-0 ceph-mon[74676]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:53:22.694 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:53:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:53:22.695 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:53:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:53:22.696 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:53:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:22.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:23 compute-0 ceph-mon[74676]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:23.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:23 compute-0 nova_compute[251290]: 2026-02-02 11:53:23.748 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:25 compute-0 ceph-mon[74676]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:26 compute-0 nova_compute[251290]: 2026-02-02 11:53:26.135 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:53:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:53:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:27.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:27 compute-0 ceph-mon[74676]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:28.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:28 compute-0 nova_compute[251290]: 2026-02-02 11:53:28.751 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:28.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:29 compute-0 sudo[287488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:53:29 compute-0 sudo[287488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:29 compute-0 sudo[287488]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:29 compute-0 ceph-mon[74676]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:53:29
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'default.rgw.meta', '.nfs']
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:53:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:53:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:53:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:30.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:31 compute-0 nova_compute[251290]: 2026-02-02 11:53:31.137 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:31 compute-0 ceph-mon[74676]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:32.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:33.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:33 compute-0 ceph-mon[74676]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:33 compute-0 nova_compute[251290]: 2026-02-02 11:53:33.754 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:34.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:34.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:35 compute-0 ceph-mon[74676]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:36 compute-0 nova_compute[251290]: 2026-02-02 11:53:36.186 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:36.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:36.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:36] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:53:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:36] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:53:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:37.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:37 compute-0 podman[287521]: 2026-02-02 11:53:37.298186111 +0000 UTC m=+0.085994834 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:53:37 compute-0 podman[287522]: 2026-02-02 11:53:37.316510638 +0000 UTC m=+0.102490908 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 11:53:37 compute-0 ceph-mon[74676]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:38 compute-0 nova_compute[251290]: 2026-02-02 11:53:38.755 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:38.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:38.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:39 compute-0 ceph-mon[74676]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:40.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:40.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:40 compute-0 ceph-mon[74676]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:41 compute-0 nova_compute[251290]: 2026-02-02 11:53:41.189 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:42.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:53:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:53:43 compute-0 ceph-mon[74676]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:43.570Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:53:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:43.570Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:53:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:43.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:53:43 compute-0 nova_compute[251290]: 2026-02-02 11:53:43.757 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:44 compute-0 nova_compute[251290]: 2026-02-02 11:53:44.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:44.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/111421547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:53:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/111421547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:53:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:53:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:44 compute-0 sudo[287572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:53:44 compute-0 sudo[287572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:44 compute-0 sudo[287572]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:44 compute-0 sudo[287598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:53:44 compute-0 sudo[287598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.050 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.051 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.051 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.051 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.051 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:53:45 compute-0 sudo[287598]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:45 compute-0 ceph-mon[74676]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/465748549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3195003875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.534 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:53:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:53:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:53:45 compute-0 sudo[287676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:53:45 compute-0 sudo[287676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:45 compute-0 sudo[287676]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:45 compute-0 sudo[287701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:53:45 compute-0 sudo[287701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.715 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.717 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4505MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.718 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.718 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.863 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.864 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:53:45 compute-0 nova_compute[251290]: 2026-02-02 11:53:45.882 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:53:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.053982008 +0000 UTC m=+0.042888044 container create c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 11:53:46 compute-0 systemd[1]: Started libpod-conmon-c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83.scope.
Feb 02 11:53:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.031321516 +0000 UTC m=+0.020227572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.144649424 +0000 UTC m=+0.133555480 container init c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.151041588 +0000 UTC m=+0.139947614 container start c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.154264811 +0000 UTC m=+0.143170967 container attach c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 11:53:46 compute-0 boring_haslett[287801]: 167 167
Feb 02 11:53:46 compute-0 systemd[1]: libpod-c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83.scope: Deactivated successfully.
Feb 02 11:53:46 compute-0 conmon[287801]: conmon c8b61719447e8f148deb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83.scope/container/memory.events
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.15839658 +0000 UTC m=+0.147302616 container died c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebece360ec3e7f05ed9e24960c1b1a30f06a20f8d75ed3bb882d97abf6f41cde-merged.mount: Deactivated successfully.
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.189 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:46 compute-0 podman[287784]: 2026-02-02 11:53:46.201593132 +0000 UTC m=+0.190499158 container remove c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb 02 11:53:46 compute-0 systemd[1]: libpod-conmon-c8b61719447e8f148debd773907c5f9199dc0385664dd47635f45c02ebd6ee83.scope: Deactivated successfully.
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.329808159 +0000 UTC m=+0.043203163 container create 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.372 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:53:46 compute-0 systemd[1]: Started libpod-conmon-3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf.scope.
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.382 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:53:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:46.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.398 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.400 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:53:46 compute-0 nova_compute[251290]: 2026-02-02 11:53:46.400 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:53:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.311510733 +0000 UTC m=+0.024905747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.42093227 +0000 UTC m=+0.134327304 container init 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.428146127 +0000 UTC m=+0.141541131 container start 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.432541764 +0000 UTC m=+0.145936768 container attach 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3195003875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2475320083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3910772060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:46 compute-0 exciting_engelbart[287842]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:53:46 compute-0 exciting_engelbart[287842]: --> All data devices are unavailable
Feb 02 11:53:46 compute-0 systemd[1]: libpod-3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf.scope: Deactivated successfully.
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.756376847 +0000 UTC m=+0.469771851 container died 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb 02 11:53:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:46.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d7d5afd428a2d4bb8d07fd62204e373026c977ead7730f102f33ea23083dfea-merged.mount: Deactivated successfully.
Feb 02 11:53:46 compute-0 podman[287825]: 2026-02-02 11:53:46.79820569 +0000 UTC m=+0.511600694 container remove 3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:53:46 compute-0 systemd[1]: libpod-conmon-3f3a28ad97fef756959578de18db72c2403f050b06f198d0aa5eacc3271a06bf.scope: Deactivated successfully.
Feb 02 11:53:46 compute-0 sudo[287701]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:46 compute-0 sudo[287868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:53:46 compute-0 sudo[287868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:46 compute-0 sudo[287868]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:46 compute-0 sudo[287893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:53:46 compute-0 sudo[287893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:53:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:53:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:47.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.329947931 +0000 UTC m=+0.038841868 container create 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb 02 11:53:47 compute-0 systemd[1]: Started libpod-conmon-8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4.scope.
Feb 02 11:53:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.407175302 +0000 UTC m=+0.116069259 container init 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.313535069 +0000 UTC m=+0.022429026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.413290688 +0000 UTC m=+0.122184625 container start 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.416706656 +0000 UTC m=+0.125600603 container attach 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:53:47 compute-0 quirky_sanderson[287976]: 167 167
Feb 02 11:53:47 compute-0 systemd[1]: libpod-8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4.scope: Deactivated successfully.
Feb 02 11:53:47 compute-0 conmon[287976]: conmon 8e538aada5efd8667d35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4.scope/container/memory.events
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.420246228 +0000 UTC m=+0.129140165 container died 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8be752daf0ecc75f40b6e45a2f6827ecbd58deb85c9047787ccd3f2f6dfb6e92-merged.mount: Deactivated successfully.
Feb 02 11:53:47 compute-0 podman[287959]: 2026-02-02 11:53:47.461453303 +0000 UTC m=+0.170347240 container remove 8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:53:47 compute-0 systemd[1]: libpod-conmon-8e538aada5efd8667d35526dfda58f4e43eb42ebe5e99fe9dc3725a3d3e5cca4.scope: Deactivated successfully.
Feb 02 11:53:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.588243119 +0000 UTC m=+0.036621934 container create 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:53:47 compute-0 systemd[1]: Started libpod-conmon-64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e.scope.
Feb 02 11:53:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a411e5cc367dfe69c3a005c9620d6df9becad92da93a3ecbfa21fd3d2d6b2a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a411e5cc367dfe69c3a005c9620d6df9becad92da93a3ecbfa21fd3d2d6b2a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a411e5cc367dfe69c3a005c9620d6df9becad92da93a3ecbfa21fd3d2d6b2a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a411e5cc367dfe69c3a005c9620d6df9becad92da93a3ecbfa21fd3d2d6b2a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.572301901 +0000 UTC m=+0.020680736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.675732385 +0000 UTC m=+0.124111230 container init 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.682450299 +0000 UTC m=+0.130829114 container start 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.688154573 +0000 UTC m=+0.136533388 container attach 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]: {
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:     "1": [
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:         {
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "devices": [
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "/dev/loop3"
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             ],
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "lv_name": "ceph_lv0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "lv_size": "21470642176",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "name": "ceph_lv0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "tags": {
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.cluster_name": "ceph",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.crush_device_class": "",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.encrypted": "0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.osd_id": "1",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.type": "block",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.vdo": "0",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:                 "ceph.with_tpm": "0"
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             },
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "type": "block",
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:             "vg_name": "ceph_vg0"
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:         }
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]:     ]
Feb 02 11:53:47 compute-0 wonderful_cartwright[288017]: }
Feb 02 11:53:47 compute-0 systemd[1]: libpod-64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e.scope: Deactivated successfully.
Feb 02 11:53:47 compute-0 podman[288001]: 2026-02-02 11:53:47.965526969 +0000 UTC m=+0.413905784 container died 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a411e5cc367dfe69c3a005c9620d6df9becad92da93a3ecbfa21fd3d2d6b2a2-merged.mount: Deactivated successfully.
Feb 02 11:53:48 compute-0 podman[288001]: 2026-02-02 11:53:48.004177411 +0000 UTC m=+0.452556226 container remove 64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_cartwright, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:53:48 compute-0 systemd[1]: libpod-conmon-64d093fae534923c0a2cb9cf60ce7180340c9b374cc26bf4c1fcae23a09a8b2e.scope: Deactivated successfully.
Feb 02 11:53:48 compute-0 sudo[287893]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:48 compute-0 sudo[288039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:53:48 compute-0 sudo[288039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:48 compute-0 sudo[288039]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:48 compute-0 sudo[288064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:53:48 compute-0 sudo[288064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:48.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.530406324 +0000 UTC m=+0.034604016 container create b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 11:53:48 compute-0 systemd[1]: Started libpod-conmon-b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d.scope.
Feb 02 11:53:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.589086481 +0000 UTC m=+0.093284193 container init b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.594617931 +0000 UTC m=+0.098815623 container start b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.5973784 +0000 UTC m=+0.101576112 container attach b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb 02 11:53:48 compute-0 systemd[1]: libpod-b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d.scope: Deactivated successfully.
Feb 02 11:53:48 compute-0 determined_ptolemy[288147]: 167 167
Feb 02 11:53:48 compute-0 conmon[288147]: conmon b11097e85580506a242e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d.scope/container/memory.events
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.600458069 +0000 UTC m=+0.104655761 container died b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Feb 02 11:53:48 compute-0 ceph-mon[74676]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.514924149 +0000 UTC m=+0.019121861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1664782525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c1a07fa727d8759bd571007e4b42c5934c094931715bc7fb171729bbda0f92-merged.mount: Deactivated successfully.
Feb 02 11:53:48 compute-0 podman[288131]: 2026-02-02 11:53:48.634828297 +0000 UTC m=+0.139025989 container remove b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_ptolemy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:53:48 compute-0 systemd[1]: libpod-conmon-b11097e85580506a242e2ad01d1e0f9ff9afbcf168043862b2582b4821d5dc6d.scope: Deactivated successfully.
Feb 02 11:53:48 compute-0 podman[288170]: 2026-02-02 11:53:48.760434969 +0000 UTC m=+0.040326061 container create 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:53:48 compute-0 nova_compute[251290]: 2026-02-02 11:53:48.762 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:48.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:48 compute-0 systemd[1]: Started libpod-conmon-43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426.scope.
Feb 02 11:53:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1429006e6a1a5f79f5697211fe194e62b929c4206bbba57c4b22c75819bfefec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1429006e6a1a5f79f5697211fe194e62b929c4206bbba57c4b22c75819bfefec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1429006e6a1a5f79f5697211fe194e62b929c4206bbba57c4b22c75819bfefec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:48 compute-0 podman[288170]: 2026-02-02 11:53:48.74378925 +0000 UTC m=+0.023680362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1429006e6a1a5f79f5697211fe194e62b929c4206bbba57c4b22c75819bfefec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:53:48 compute-0 podman[288170]: 2026-02-02 11:53:48.849349456 +0000 UTC m=+0.129240578 container init 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:53:48 compute-0 podman[288170]: 2026-02-02 11:53:48.854973008 +0000 UTC m=+0.134864100 container start 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:53:48 compute-0 podman[288170]: 2026-02-02 11:53:48.857868881 +0000 UTC m=+0.137759993 container attach 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:53:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:48.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:49 compute-0 sudo[288245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:53:49 compute-0 sudo[288245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:49 compute-0 sudo[288245]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:49 compute-0 nova_compute[251290]: 2026-02-02 11:53:49.400 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:49 compute-0 nova_compute[251290]: 2026-02-02 11:53:49.401 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:49 compute-0 nova_compute[251290]: 2026-02-02 11:53:49.401 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:49 compute-0 nova_compute[251290]: 2026-02-02 11:53:49.402 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:49 compute-0 lvm[288287]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:53:49 compute-0 lvm[288287]: VG ceph_vg0 finished
Feb 02 11:53:49 compute-0 vigilant_joliot[288187]: {}
Feb 02 11:53:49 compute-0 systemd[1]: libpod-43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426.scope: Deactivated successfully.
Feb 02 11:53:49 compute-0 podman[288170]: 2026-02-02 11:53:49.535968402 +0000 UTC m=+0.815859494 container died 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb 02 11:53:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1429006e6a1a5f79f5697211fe194e62b929c4206bbba57c4b22c75819bfefec-merged.mount: Deactivated successfully.
Feb 02 11:53:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2460752998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:53:49 compute-0 podman[288170]: 2026-02-02 11:53:49.672343654 +0000 UTC m=+0.952234756 container remove 43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:53:49 compute-0 systemd[1]: libpod-conmon-43eadd230cf9287dfd8dd1b83c9e40a87fd6e9215ef283b1ff9368389f4c3426.scope: Deactivated successfully.
Feb 02 11:53:49 compute-0 sudo[288064]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:53:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:53:49 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:49 compute-0 sudo[288306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:53:49 compute-0 sudo[288306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:53:49 compute-0 sudo[288306]: pam_unix(sudo:session): session closed for user root
Feb 02 11:53:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:50.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:50 compute-0 ceph-mon[74676]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:50 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:53:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:51 compute-0 nova_compute[251290]: 2026-02-02 11:53:51.191 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:53:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:52.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:52 compute-0 ceph-mon[74676]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:53:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:53.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:53 compute-0 nova_compute[251290]: 2026-02-02 11:53:53.766 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:54 compute-0 nova_compute[251290]: 2026-02-02 11:53:54.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:54 compute-0 nova_compute[251290]: 2026-02-02 11:53:54.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:53:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:54.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:54 compute-0 ceph-mon[74676]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:53:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:53:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:53:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:53:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:53:56 compute-0 nova_compute[251290]: 2026-02-02 11:53:56.194 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:56.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:56.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:56 compute-0 ceph-mon[74676]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:53:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:53:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:53:56] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb 02 11:53:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:53:57 compute-0 nova_compute[251290]: 2026-02-02 11:53:57.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:57 compute-0 nova_compute[251290]: 2026-02-02 11:53:57.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:53:57 compute-0 nova_compute[251290]: 2026-02-02 11:53:57.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:53:57 compute-0 nova_compute[251290]: 2026-02-02 11:53:57.047 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:53:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:57.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:57 compute-0 ceph-mon[74676]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:53:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:53:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:53:58.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:53:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:53:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:53:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:53:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:53:58 compute-0 nova_compute[251290]: 2026-02-02 11:53:58.992 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:53:58 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:53:58.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:53:59 compute-0 nova_compute[251290]: 2026-02-02 11:53:59.042 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:53:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:53:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:53:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:00.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:00 compute-0 ceph-mon[74676]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:01 compute-0 nova_compute[251290]: 2026-02-02 11:54:01.196 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:02.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:02 compute-0 ceph-mon[74676]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:03.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:03 compute-0 nova_compute[251290]: 2026-02-02 11:54:03.995 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:04.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:04 compute-0 ceph-mon[74676]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:04.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:06 compute-0 nova_compute[251290]: 2026-02-02 11:54:06.198 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:06 compute-0 ceph-mon[74676]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:06] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:54:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:06] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:54:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:07.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:54:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:07.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:54:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:07.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:08 compute-0 podman[288349]: 2026-02-02 11:54:08.271327447 +0000 UTC m=+0.058236576 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 11:54:08 compute-0 podman[288350]: 2026-02-02 11:54:08.324256159 +0000 UTC m=+0.111254600 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 11:54:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:08 compute-0 ceph-mon[74676]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:08.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:54:08 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:08.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:08 compute-0 nova_compute[251290]: 2026-02-02 11:54:08.997 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:09 compute-0 sudo[288396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:54:09 compute-0 sudo[288396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:09 compute-0 sudo[288396]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:54:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:54:10 compute-0 ceph-mon[74676]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:11 compute-0 nova_compute[251290]: 2026-02-02 11:54:11.201 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:12.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:12 compute-0 ceph-mon[74676]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:13.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:14 compute-0 nova_compute[251290]: 2026-02-02 11:54:14.001 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:14.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:54:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:14 compute-0 ceph-mon[74676]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:16 compute-0 nova_compute[251290]: 2026-02-02 11:54:16.203 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:16.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:16 compute-0 ceph-mon[74676]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:16.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:16] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:54:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:16] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:54:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:17.260Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:54:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:17.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:54:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:18.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:18 compute-0 ceph-mon[74676]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:18.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:54:18 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:18.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:19 compute-0 nova_compute[251290]: 2026-02-02 11:54:19.004 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:20 compute-0 ceph-mon[74676]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:21 compute-0 nova_compute[251290]: 2026-02-02 11:54:21.205 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:22.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:54:22.696 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:54:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:54:22.697 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:54:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:54:22.697 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:54:22 compute-0 ceph-mon[74676]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:22.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:54:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:23.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:24 compute-0 nova_compute[251290]: 2026-02-02 11:54:24.008 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:24.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:24 compute-0 ceph-mon[74676]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
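
  [note] The ganesha lines repeat a grace-period cycle every ~5 seconds: the NFS server re-enters a 90-second
  grace window, reloads (zero) client records from the RADOS backend, finds no clients to reclaim, and
  rados_cluster_grace_enforcing returns ret=-45. That value reads as a negative Linux errno and can be decoded
  with the standard library; whether ganesha intends the literal errno meaning here is not stated in the log,
  so treat the decode as a hint rather than a diagnosis.

      import errno, os

      ret = -45  # as logged by ganesha.nfsd's rados_cluster_grace_enforcing
      code = -ret
      print(errno.errorcode.get(code, "unknown"), "-", os.strerror(code))
      # On Linux this prints: EL2NSYNC - Level 2 not synchronized
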
Feb 02 11:54:26 compute-0 nova_compute[251290]: 2026-02-02 11:54:26.207 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:26.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:26 compute-0 ceph-mon[74676]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:26.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:26] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb 02 11:54:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:26] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
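
  [note] The paired mgr access-log lines record Prometheus scraping the ceph-mgr exporter roughly every ten
  seconds (about 48 kB per /metrics response). The same endpoint can be pulled by hand to inspect what the
  dashboard alerts are based on; the exporter port is not visible in the log, so 9283 below is an assumption
  (the prometheus mgr module's default).

      # Pull the ceph-mgr exporter by hand; port 9283 is an assumption (module default).
      import urllib.request

      with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
          body = resp.read().decode()

      samples = [l for l in body.splitlines() if l and not l.startswith("#")]
      print(f"{len(body)} bytes, {len(samples)} samples")
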
Feb 02 11:54:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:27.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:28.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:28.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:28 compute-0 ceph-mon[74676]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:28 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:28.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:29 compute-0 nova_compute[251290]: 2026-02-02 11:54:29.012 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:29 compute-0 sudo[288441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:54:29 compute-0 sudo[288441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:29 compute-0 sudo[288441]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:54:29
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', '.nfs', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:54:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:54:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
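
  [note] The pg_autoscaler output above is internally consistent: each raw pg target equals the pool's share
  of raw capacity times its bias times 300, and is then quantized to a power of two (and left alone unless it
  drifts far from the current value). The factor 300 is inferred from the figures, not read from the log;
  plausibly num_osds x mon_target_pg_per_osd = 3 x 100 on this 3-OSD, 60 GiB cluster. A quick check against
  three of the logged pools:

      # Reproduce the raw pg targets logged above. TOTAL_PG_TARGET = 300 is an
      # inference from the data (plausibly num_osds * mon_target_pg_per_osd).
      TOTAL_PG_TARGET = 300

      pools = {  # name: (usage ratio, bias) as logged
          ".mgr":               (7.185749983720779e-06, 1.0),
          "images":             (0.000665858301588852,  1.0),
          "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
      }
      for name, (ratio, bias) in pools.items():
          print(f"{name}: pg target {ratio * bias * TOTAL_PG_TARGET:.6g}")
      # prints approx. 0.00215572, 0.199757, 0.000610471 -- matching the log
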
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:54:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:54:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:30.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:30 compute-0 ceph-mon[74676]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:31 compute-0 nova_compute[251290]: 2026-02-02 11:54:31.209 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:31 compute-0 ceph-mgr[74969]: [devicehealth INFO root] Check health
Feb 02 11:54:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:32.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:54:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:32.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:54:32 compute-0 ceph-mon[74676]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:33.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:34 compute-0 nova_compute[251290]: 2026-02-02 11:54:34.016 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:34 compute-0 ceph-mon[74676]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:36 compute-0 nova_compute[251290]: 2026-02-02 11:54:36.312 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:36.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:36 compute-0 ceph-mon[74676]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:36] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:54:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:36] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:54:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:37.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:38.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:38 compute-0 ceph-mon[74676]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:38 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:38.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:39 compute-0 nova_compute[251290]: 2026-02-02 11:54:39.017 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:39 compute-0 podman[288476]: 2026-02-02 11:54:39.267794104 +0000 UTC m=+0.056805965 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 11:54:39 compute-0 podman[288477]: 2026-02-02 11:54:39.302026488 +0000 UTC m=+0.086845798 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
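
  [note] The two podman lines are the periodic healthcheck events for the OVN containers; both report
  health_status=healthy with a zero failing streak. The same configured checks can be triggered on demand with
  `podman healthcheck run`, which exits 0 when the container's test passes. A small wrapper over the two
  container names taken from the log:

      # Run the configured container healthchecks on demand; exit status 0 = healthy.
      import subprocess

      for name in ("ovn_metadata_agent", "ovn_controller"):  # names as logged
          rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
          print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")
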
Feb 02 11:54:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:40 compute-0 ceph-mon[74676]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:41 compute-0 nova_compute[251290]: 2026-02-02 11:54:41.314 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:42.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:42 compute-0 ceph-mon[74676]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:43.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:44 compute-0 nova_compute[251290]: 2026-02-02 11:54:44.019 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:54:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:54:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:54:44 compute-0 ceph-mon[74676]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2062714416' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:54:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/2062714416' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:54:44 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.043 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.043 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.043 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.043 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.044 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:54:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:54:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109422663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.483 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:54:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.612 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.613 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.614 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.614 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.808 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.809 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:54:45 compute-0 nova_compute[251290]: 2026-02-02 11:54:45.915 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:54:45 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3109422663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.365 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:54:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830844227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.443 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.449 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.464 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.466 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:54:46 compute-0 nova_compute[251290]: 2026-02-02 11:54:46.467 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
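
  [note] The block above is one pass of nova-compute's update_available_resource periodic task: it takes the
  compute_resources lock, audits hypervisor resources, shells out twice to ceph df (each run also appears on
  the mon as a client.openstack df dispatch), and concludes inventory is unchanged for provider
  92919e7b-7846-4645-9401-9fd55bbbf435. The storage probe can be reproduced with the exact command line from
  the log; ceph df --format=json carries the byte totals under its top-level stats block:

      # Reproduce nova's storage probe using the command line as logged.
      import json, subprocess

      out = subprocess.run(
          ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"],
          capture_output=True, text=True, check=True,
      ).stdout
      stats = json.loads(out)["stats"]  # stats block as emitted by ceph df --format=json
      print(f"total {stats['total_bytes'] / 2**30:.1f} GiB, "
            f"avail {stats['total_avail_bytes'] / 2**30:.1f} GiB")
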
Feb 02 11:54:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:46] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:54:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:46] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:54:46 compute-0 ceph-mon[74676]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2100631930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2830844227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:47.263Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:54:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:47.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:47 compute-0 nova_compute[251290]: 2026-02-02 11:54:47.471 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:47 compute-0 nova_compute[251290]: 2026-02-02 11:54:47.471 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=cleanup t=2026-02-02T11:54:47.983832991Z level=info msg="Completed cleanup jobs" duration=11.970577ms
Feb 02 11:54:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1144539895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3074919612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:48 compute-0 nova_compute[251290]: 2026-02-02 11:54:48.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=plugins.update.checker t=2026-02-02T11:54:48.085278578Z level=info msg="Update check succeeded" duration=46.117845ms
Feb 02 11:54:48 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-grafana-compute-0[105144]: logger=grafana.update.checker t=2026-02-02T11:54:48.087327668Z level=info msg="Update check succeeded" duration=51.076249ms
Feb 02 11:54:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:48.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:49.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:49 compute-0 ceph-mon[74676]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:54:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1346516574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:54:49 compute-0 nova_compute[251290]: 2026-02-02 11:54:49.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:49 compute-0 nova_compute[251290]: 2026-02-02 11:54:49.023 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:49 compute-0 sudo[288570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:54:49 compute-0 sudo[288570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:49 compute-0 sudo[288570]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:50 compute-0 nova_compute[251290]: 2026-02-02 11:54:50.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:50 compute-0 sudo[288596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:54:50 compute-0 ceph-mon[74676]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:54:50 compute-0 sudo[288596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:50 compute-0 sudo[288596]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:50 compute-0 sudo[288621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Feb 02 11:54:50 compute-0 sudo[288621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:50 compute-0 sudo[288621]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb 02 11:54:50 compute-0 sudo[288666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:54:50 compute-0 sudo[288666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:50 compute-0 sudo[288666]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb 02 11:54:50 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:50 compute-0 sudo[288691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:54:50 compute-0 sudo[288691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:50.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:50 compute-0 sudo[288691]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:51 compute-0 nova_compute[251290]: 2026-02-02 11:54:51.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:54:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:54:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:54:51 compute-0 sudo[288749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:54:51 compute-0 sudo[288749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:51 compute-0 sudo[288749]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:51 compute-0 sudo[288774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:54:51 compute-0 sudo[288774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:51 compute-0 nova_compute[251290]: 2026-02-02 11:54:51.367 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:54:51 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.598837121 +0000 UTC m=+0.036600701 container create 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb 02 11:54:51 compute-0 systemd[1]: Started libpod-conmon-2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be.scope.
Feb 02 11:54:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.662880146 +0000 UTC m=+0.100643746 container init 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.670027623 +0000 UTC m=+0.107791213 container start 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.673941096 +0000 UTC m=+0.111704696 container attach 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 11:54:51 compute-0 sharp_dijkstra[288857]: 167 167
Feb 02 11:54:51 compute-0 systemd[1]: libpod-2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be.scope: Deactivated successfully.
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.67546846 +0000 UTC m=+0.113232030 container died 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.582733065 +0000 UTC m=+0.020496665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-442ea88ddffdd17438a414e388639f23532969cfd2ffa95972e6385ff182a6c4-merged.mount: Deactivated successfully.
Feb 02 11:54:51 compute-0 podman[288841]: 2026-02-02 11:54:51.717059745 +0000 UTC m=+0.154823325 container remove 2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:54:51 compute-0 systemd[1]: libpod-conmon-2ac5acdd7711a054e2044447a490e80e2ebbc1bb16bf650c70cd6056ba6dd0be.scope: Deactivated successfully.
Feb 02 11:54:51 compute-0 podman[288883]: 2026-02-02 11:54:51.847320057 +0000 UTC m=+0.039296389 container create a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:51 compute-0 systemd[1]: Started libpod-conmon-a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376.scope.
Feb 02 11:54:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:51 compute-0 podman[288883]: 2026-02-02 11:54:51.922521354 +0000 UTC m=+0.114497686 container init a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb 02 11:54:51 compute-0 podman[288883]: 2026-02-02 11:54:51.830328295 +0000 UTC m=+0.022304647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:51 compute-0 podman[288883]: 2026-02-02 11:54:51.928044564 +0000 UTC m=+0.120020896 container start a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 11:54:51 compute-0 podman[288883]: 2026-02-02 11:54:51.93237129 +0000 UTC m=+0.124347642 container attach a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:54:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:52 compute-0 mystifying_edison[288900]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:54:52 compute-0 mystifying_edison[288900]: --> All data devices are unavailable
Feb 02 11:54:52 compute-0 systemd[1]: libpod-a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376.scope: Deactivated successfully.
Feb 02 11:54:52 compute-0 podman[288915]: 2026-02-02 11:54:52.28840787 +0000 UTC m=+0.022733680 container died a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb 02 11:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-765c1ba278510dd88ff12d208bef875599a0e81173e0a314eb70c983e82a521e-merged.mount: Deactivated successfully.
Feb 02 11:54:52 compute-0 podman[288915]: 2026-02-02 11:54:52.333750933 +0000 UTC m=+0.068076723 container remove a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:54:52 compute-0 systemd[1]: libpod-conmon-a07ae48837445f8431bc03069cf37b26400becf05ac7f2a14269b96500db4376.scope: Deactivated successfully.
Feb 02 11:54:52 compute-0 sudo[288774]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:52 compute-0 sudo[288931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:54:52 compute-0 sudo[288931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:52 compute-0 sudo[288931]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:52 compute-0 sudo[288956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:54:52 compute-0 sudo[288956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:52 compute-0 ceph-mon[74676]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.85340169 +0000 UTC m=+0.038331291 container create 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 11:54:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:52 compute-0 systemd[1]: Started libpod-conmon-2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7.scope.
Feb 02 11:54:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.836742988 +0000 UTC m=+0.021672619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.936660361 +0000 UTC m=+0.121590052 container init 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.945018333 +0000 UTC m=+0.129947984 container start 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.948411922 +0000 UTC m=+0.133341513 container attach 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:54:52 compute-0 goofy_kalam[289038]: 167 167
Feb 02 11:54:52 compute-0 systemd[1]: libpod-2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7.scope: Deactivated successfully.
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.951009017 +0000 UTC m=+0.135938618 container died 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-045dc4d26e64a2f25d4c8380229548d38aad2a54e4eb4d1300d6a3f21dcded75-merged.mount: Deactivated successfully.
Feb 02 11:54:52 compute-0 podman[289021]: 2026-02-02 11:54:52.989149761 +0000 UTC m=+0.174079372 container remove 2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:52 compute-0 systemd[1]: libpod-conmon-2f273dc10f380a621017c0204f888e2af0d2ffbeb342c55c55a44bef17703dd7.scope: Deactivated successfully.
Feb 02 11:54:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.129012181 +0000 UTC m=+0.045689224 container create 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb 02 11:54:53 compute-0 systemd[1]: Started libpod-conmon-21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2.scope.
Feb 02 11:54:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae70b691b63891cb6aec9adf572050e8162a003b14e59bd9d218e567bb9d603c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae70b691b63891cb6aec9adf572050e8162a003b14e59bd9d218e567bb9d603c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae70b691b63891cb6aec9adf572050e8162a003b14e59bd9d218e567bb9d603c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae70b691b63891cb6aec9adf572050e8162a003b14e59bd9d218e567bb9d603c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.10964402 +0000 UTC m=+0.026321113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.213015104 +0000 UTC m=+0.129692187 container init 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.218505823 +0000 UTC m=+0.135182876 container start 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.221601852 +0000 UTC m=+0.138278905 container attach 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:54:53 compute-0 determined_banach[289079]: {
Feb 02 11:54:53 compute-0 determined_banach[289079]:     "1": [
Feb 02 11:54:53 compute-0 determined_banach[289079]:         {
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "devices": [
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "/dev/loop3"
Feb 02 11:54:53 compute-0 determined_banach[289079]:             ],
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "lv_name": "ceph_lv0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "lv_size": "21470642176",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "name": "ceph_lv0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "tags": {
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.cluster_name": "ceph",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.crush_device_class": "",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.encrypted": "0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.osd_id": "1",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.type": "block",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.vdo": "0",
Feb 02 11:54:53 compute-0 determined_banach[289079]:                 "ceph.with_tpm": "0"
Feb 02 11:54:53 compute-0 determined_banach[289079]:             },
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "type": "block",
Feb 02 11:54:53 compute-0 determined_banach[289079]:             "vg_name": "ceph_vg0"
Feb 02 11:54:53 compute-0 determined_banach[289079]:         }
Feb 02 11:54:53 compute-0 determined_banach[289079]:     ]
Feb 02 11:54:53 compute-0 determined_banach[289079]: }
Feb 02 11:54:53 compute-0 systemd[1]: libpod-21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2.scope: Deactivated successfully.
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.500266112 +0000 UTC m=+0.416943175 container died 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae70b691b63891cb6aec9adf572050e8162a003b14e59bd9d218e567bb9d603c-merged.mount: Deactivated successfully.
Feb 02 11:54:53 compute-0 podman[289063]: 2026-02-02 11:54:53.540207288 +0000 UTC m=+0.456884341 container remove 21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:54:53 compute-0 systemd[1]: libpod-conmon-21404e9e49ceca47449de4ab036b660568f560f7027e872b8ed550a505d254e2.scope: Deactivated successfully.
Feb 02 11:54:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:53.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:54:53 compute-0 sudo[288956]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:53 compute-0 sudo[289101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:54:53 compute-0 sudo[289101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:53 compute-0 sudo[289101]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:53 compute-0 sudo[289126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:54:53 compute-0 sudo[289126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:54 compute-0 nova_compute[251290]: 2026-02-02 11:54:54.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:54 compute-0 nova_compute[251290]: 2026-02-02 11:54:54.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 11:54:54 compute-0 nova_compute[251290]: 2026-02-02 11:54:54.026 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:54 compute-0 nova_compute[251290]: 2026-02-02 11:54:54.044 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 11:54:54 compute-0 nova_compute[251290]: 2026-02-02 11:54:54.045 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.068176137 +0000 UTC m=+0.036846168 container create 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:54:54 compute-0 systemd[1]: Started libpod-conmon-0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8.scope.
Feb 02 11:54:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.131575193 +0000 UTC m=+0.100245224 container init 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.139618006 +0000 UTC m=+0.108288037 container start 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.143621991 +0000 UTC m=+0.112292022 container attach 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:54:54 compute-0 gallant_diffie[289207]: 167 167
Feb 02 11:54:54 compute-0 systemd[1]: libpod-0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8.scope: Deactivated successfully.
Feb 02 11:54:54 compute-0 conmon[289207]: conmon 0896d93a259d376c7702 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8.scope/container/memory.events
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.148978026 +0000 UTC m=+0.117648057 container died 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.052603196 +0000 UTC m=+0.021273227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cacbb5096720baf4ef162a4765761f8af3164829ae4be2d5ff002f9e3f6f39fc-merged.mount: Deactivated successfully.
Feb 02 11:54:54 compute-0 podman[289192]: 2026-02-02 11:54:54.184425532 +0000 UTC m=+0.153095563 container remove 0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:54:54 compute-0 systemd[1]: libpod-conmon-0896d93a259d376c77026230fe77aa8f97893b25d451d036f602add8efd57ee8.scope: Deactivated successfully.
Feb 02 11:54:54 compute-0 podman[289230]: 2026-02-02 11:54:54.316080165 +0000 UTC m=+0.038817265 container create 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:54:54 compute-0 systemd[1]: Started libpod-conmon-36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7.scope.
Feb 02 11:54:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f16404b5c6267c6bed49ac133213992866e26dcd5714e4e3004000e73d537b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f16404b5c6267c6bed49ac133213992866e26dcd5714e4e3004000e73d537b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f16404b5c6267c6bed49ac133213992866e26dcd5714e4e3004000e73d537b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f16404b5c6267c6bed49ac133213992866e26dcd5714e4e3004000e73d537b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:54:54 compute-0 podman[289230]: 2026-02-02 11:54:54.392270101 +0000 UTC m=+0.115007211 container init 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb 02 11:54:54 compute-0 podman[289230]: 2026-02-02 11:54:54.298907318 +0000 UTC m=+0.021644438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:54:54 compute-0 podman[289230]: 2026-02-02 11:54:54.398423849 +0000 UTC m=+0.121160949 container start 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:54:54 compute-0 podman[289230]: 2026-02-02 11:54:54.401455847 +0000 UTC m=+0.124193127 container attach 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:54:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:54.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:54 compute-0 ceph-mon[74676]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:54:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:54.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:54 compute-0 lvm[289321]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:54:54 compute-0 lvm[289321]: VG ceph_vg0 finished
Feb 02 11:54:55 compute-0 youthful_booth[289247]: {}
Feb 02 11:54:55 compute-0 systemd[1]: libpod-36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7.scope: Deactivated successfully.
Feb 02 11:54:55 compute-0 podman[289230]: 2026-02-02 11:54:55.05183283 +0000 UTC m=+0.774569930 container died 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb 02 11:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-74f16404b5c6267c6bed49ac133213992866e26dcd5714e4e3004000e73d537b-merged.mount: Deactivated successfully.
Feb 02 11:54:55 compute-0 podman[289230]: 2026-02-02 11:54:55.100479599 +0000 UTC m=+0.823216699 container remove 36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:54:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:54:55 compute-0 systemd[1]: libpod-conmon-36dbe0ce6b81ca756efb116a81c2d23d78fad32b8b9b3291de2e227d56d08ed7.scope: Deactivated successfully.
Feb 02 11:54:55 compute-0 sudo[289126]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:54:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:54:55 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:55 compute-0 sudo[289337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:54:55 compute-0 sudo[289337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:54:55 compute-0 sudo[289337]: pam_unix(sudo:session): session closed for user root
Feb 02 11:54:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:54:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:54:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:54:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:54:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:54:56 compute-0 nova_compute[251290]: 2026-02-02 11:54:56.074 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:56 compute-0 nova_compute[251290]: 2026-02-02 11:54:56.074 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:54:56 compute-0 ceph-mon[74676]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:54:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:54:56 compute-0 nova_compute[251290]: 2026-02-02 11:54:56.369 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:56.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:54:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:56.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:54:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:56] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:54:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:54:56] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:54:57 compute-0 nova_compute[251290]: 2026-02-02 11:54:57.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:54:57 compute-0 nova_compute[251290]: 2026-02-02 11:54:57.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:54:57 compute-0 nova_compute[251290]: 2026-02-02 11:54:57.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:54:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:54:57 compute-0 nova_compute[251290]: 2026-02-02 11:54:57.037 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:54:57 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:54:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:57.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:58 compute-0 ceph-mon[74676]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:54:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:54:58.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:54:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:54:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:54:58.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:54:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:54:59.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:54:59 compute-0 nova_compute[251290]: 2026-02-02 11:54:59.030 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:54:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:54:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:54:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:00 compute-0 ceph-mon[74676]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Feb 02 11:55:00 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:00.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:00.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:55:01 compute-0 nova_compute[251290]: 2026-02-02 11:55:01.371 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:02 compute-0 ceph-mon[74676]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:55:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:55:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:02.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:55:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:02.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:03.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:04 compute-0 nova_compute[251290]: 2026-02-02 11:55:04.035 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:04 compute-0 ceph-mon[74676]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:04.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:05 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:06 compute-0 ceph-mon[74676]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:06 compute-0 nova_compute[251290]: 2026-02-02 11:55:06.373 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:06.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:06.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:07 compute-0 nova_compute[251290]: 2026-02-02 11:55:07.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:07 compute-0 nova_compute[251290]: 2026-02-02 11:55:07.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 11:55:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:07 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:07.266Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:55:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:07.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:07.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:08 compute-0 ceph-mon[74676]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:08.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:09.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:09.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:09 compute-0 nova_compute[251290]: 2026-02-02 11:55:09.039 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:09 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:09 compute-0 sudo[289376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:55:09 compute-0 sudo[289376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:09 compute-0 sudo[289376]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:09 compute-0 podman[289400]: 2026-02-02 11:55:09.673863303 +0000 UTC m=+0.056502667 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Feb 02 11:55:09 compute-0 podman[289401]: 2026-02-02 11:55:09.702894943 +0000 UTC m=+0.085091805 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:55:10 compute-0 ceph-mon[74676]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:10.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:10.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:11 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:11 compute-0 nova_compute[251290]: 2026-02-02 11:55:11.375 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:12 compute-0 ceph-mon[74676]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:12.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:13 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:13.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:14 compute-0 nova_compute[251290]: 2026-02-02 11:55:14.043 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:14 compute-0 ceph-mon[74676]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:14.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:55:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:15 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:16 compute-0 ceph-mon[74676]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:16 compute-0 nova_compute[251290]: 2026-02-02 11:55:16.376 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:16.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:17 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:17.268Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:55:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:17.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:55:18 compute-0 ceph-mon[74676]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:18.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:19.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:19 compute-0 nova_compute[251290]: 2026-02-02 11:55:19.047 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:19 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:20 compute-0 ceph-mon[74676]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:21 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:21 compute-0 nova_compute[251290]: 2026-02-02 11:55:21.377 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:22 compute-0 ceph-mon[74676]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:22.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:55:22.697 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:55:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:55:22.698 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:55:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:55:22.698 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:55:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:23 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:23 compute-0 sshd-session[289460]: Invalid user lighthouse from 80.94.92.186 port 43782
Feb 02 11:55:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:23.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:23.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:23 compute-0 sshd-session[289460]: Connection closed by invalid user lighthouse 80.94.92.186 port 43782 [preauth]
Feb 02 11:55:24 compute-0 nova_compute[251290]: 2026-02-02 11:55:24.051 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:24 compute-0 ceph-mon[74676]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:24.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:25 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:26 compute-0 nova_compute[251290]: 2026-02-02 11:55:26.379 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:26 compute-0 ceph-mon[74676]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:55:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:27 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:27.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:28 compute-0 ceph-mon[74676]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:29.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:29.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:29 compute-0 nova_compute[251290]: 2026-02-02 11:55:29.055 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:55:29
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root', '.nfs']
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
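
This balancer pass ran in upmap mode with a 5% misplaced-PG ceiling and staged none of the up-to-10 upmap changes it is allowed per pass, i.e. the 353 PGs are already evenly placed. The same state can be queried directly; on the releases I have used, `ceph balancer status` prints JSON, but treat that as an assumption:

    # Hedged sketch: query the balancer state matching the pass logged above.
    import json, subprocess

    status = json.loads(subprocess.check_output(['ceph', 'balancer', 'status']))
    print(status.get('active'), status.get('mode'))
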
Feb 02 11:55:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:55:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
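
The mon command being audited here is the mgr polling the OSD blocklist, which recurs roughly every 15 s in this log. The equivalent hand-run query, using the same prefix and format that appear in the audit record (an empty list is the expected result on a healthy cluster):

    # Same query the mgr dispatches above, run from a client with mon access.
    import json, subprocess

    blocklist = json.loads(subprocess.check_output(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json']))
    print(blocklist)
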
Feb 02 11:55:29 compute-0 sudo[289469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:55:29 compute-0 sudo[289469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:29 compute-0 sudo[289469]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
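
The autoscaler numbers above are internally consistent: each "pg target" equals usage_ratio × bias × a PG budget of 300, which matches the default mon_target_pg_per_osd of 100 across what are evidently 3 OSDs (both inferred from the arithmetic, not read from this cluster's configuration). The target is then quantized to a power of two, subject to per-pool floors, giving the "quantized to" values. A worked check against three of the pools:

    # Reproduce the logged "pg target" values. PG_BUDGET = 100 PGs/OSD * 3 OSDs
    # is an inference from the numbers above, not a queried setting.
    PG_BUDGET = 100 * 3
    pools = [
        ('.mgr',               7.185749983720779e-06, 1.0),  # -> 0.00215572...
        ('vms',                6.359070782053786e-08, 1.0),  # -> 1.90772...e-05
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0),  # -> 0.00061047...
    ]
    for name, usage_ratio, bias in pools:
        print(f'{name}: pg target {usage_ratio * bias * PG_BUDGET:.12g}')
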
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:55:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:55:30 compute-0 ceph-mon[74676]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:30.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:30.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:31 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:31 compute-0 nova_compute[251290]: 2026-02-02 11:55:31.381 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:32 compute-0 ceph-mon[74676]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:32.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:33 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:33.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:34 compute-0 nova_compute[251290]: 2026-02-02 11:55:34.059 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:34 compute-0 ceph-mon[74676]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:55:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:34.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:55:35 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:36 compute-0 nova_compute[251290]: 2026-02-02 11:55:36.383 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:36 compute-0 ceph-mon[74676]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:36] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:55:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:36] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
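
The same scrape is logged twice, once by the mgr container's stdout and once by the cherrypy access logger inside ceph-mgr; 48455 is the metrics payload size in bytes. Fetching the endpoint by hand looks roughly like the sketch below; 9283 is the prometheus module's default port and is an assumption here, since the log only shows the scraper's source address:

    # Hedged fetch of the mgr prometheus endpoint scraped above.
    import urllib.request

    url = 'http://192.168.122.100:9283/metrics'  # default mgr port: assumption
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body), 'bytes')
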
Feb 02 11:55:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:37 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:37.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:38 compute-0 ceph-mon[74676]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:38.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:39.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:39 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:39 compute-0 nova_compute[251290]: 2026-02-02 11:55:39.196 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:40 compute-0 podman[289505]: 2026-02-02 11:55:40.283816349 +0000 UTC m=+0.075614261 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 02 11:55:40 compute-0 podman[289506]: 2026-02-02 11:55:40.313367605 +0000 UTC m=+0.103529279 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
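
Both podman health_status events report healthy with a zero failing streak; the checks are the '/openstack/healthcheck' scripts mounted into each container per the config_data above. Reading the same status back is sketched below; the JSON key for health state has moved between podman versions, so both spellings are tried and neither should be taken as authoritative:

    # Hedged sketch: read a container's health status as reported above.
    import json, subprocess

    data = json.loads(subprocess.check_output(
        ['podman', 'inspect', 'ovn_controller']))
    state = data[0].get('State', {})
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status'))
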
Feb 02 11:55:40 compute-0 ceph-mon[74676]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:55:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:55:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:41 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:41 compute-0 nova_compute[251290]: 2026-02-02 11:55:41.386 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:42 compute-0 ceph-mon[74676]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:42.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:43 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:43.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:44 compute-0 nova_compute[251290]: 2026-02-02 11:55:44.198 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:44 compute-0 ceph-mon[74676]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3666604712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:55:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3666604712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.547972) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344548038, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1550, "num_deletes": 251, "total_data_size": 3066095, "memory_usage": 3118864, "flush_reason": "Manual Compaction"}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 02 11:55:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:44.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344571802, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2989247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35310, "largest_seqno": 36859, "table_properties": {"data_size": 2981864, "index_size": 4390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15236, "raw_average_key_size": 20, "raw_value_size": 2967194, "raw_average_value_size": 3945, "num_data_blocks": 189, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033193, "oldest_key_time": 1770033193, "file_creation_time": 1770033344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 23900 microseconds, and 6288 cpu microseconds.
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.571871) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2989247 bytes OK
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.571904) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.573712) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.573782) EVENT_LOG_v1 {"time_micros": 1770033344573769, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.573818) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3059434, prev total WAL file size 3059434, number of live WAL files 2.
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.574824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2919KB)], [77(11MB)]
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344574885, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14887969, "oldest_snapshot_seqno": -1}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:55:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6732 keys, 12662139 bytes, temperature: kUnknown
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344671328, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12662139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12620460, "index_size": 23775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 176998, "raw_average_key_size": 26, "raw_value_size": 12502234, "raw_average_value_size": 1857, "num_data_blocks": 930, "num_entries": 6732, "num_filter_entries": 6732, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770033344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.671622) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12662139 bytes
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.672924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.2 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 11.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(9.2) write-amplify(4.2) OK, records in: 7252, records dropped: 520 output_compression: NoCompression
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.672947) EVENT_LOG_v1 {"time_micros": 1770033344672936, "job": 44, "event": "compaction_finished", "compaction_time_micros": 96527, "compaction_time_cpu_micros": 22906, "output_level": 6, "num_output_files": 1, "total_output_size": 12662139, "num_input_records": 7252, "num_output_records": 6732, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344673608, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344675562, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.574700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.675655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.675664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.675666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.675668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:55:44 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:55:44.675671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
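
The amplification figures RocksDB printed for job 44 can be re-derived from the byte counts in its own events: the flush produced a 2,989,247-byte L0 file (#79), the compaction read that plus the old L6 file (input_data_size 14,887,969 in total) and wrote a 12,662,139-byte L6 file (#80). Normalizing by the bytes entering from the upper level, as RocksDB does:

    # Re-derive job 44's logged write-amplify(4.2) and read-write-amplify(9.2).
    l0_in = 2989247       # table #79 (flush output, compaction's L0 input)
    total_in = 14887969   # compaction_started input_data_size (L0 + L6)
    out = 12662139        # table #80 (compaction output)
    print(f'write-amplify={out / l0_in:.1f}')                    # 4.2
    print(f'read-write-amplify={(total_in + out) / l0_in:.1f}')  # 9.2
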
Feb 02 11:55:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:45 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.345 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.367 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.367 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.367 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.367 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.368 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.387 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:46 compute-0 ceph-mon[74676]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:55:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336817765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.817 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
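
The resource audit shells out to `ceph df` as client.openstack and got its answer in 0.45 s; the mon's audit log two lines up shows the same command being dispatched. A hedged sketch of that probe and the cluster-level numbers it yields; the JSON key names ('stats', 'total_bytes', 'total_avail_bytes') follow the `ceph df` layout I know and should be treated as assumptions:

    # Hedged sketch of the periodic probe above: run `ceph df --format=json`
    # and report cluster-wide free space.
    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'free {avail / 2**30:.2f} GiB of {total / 2**30:.2f} GiB')
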
Feb 02 11:55:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.975 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.976 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4507MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.977 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:55:46 compute-0 nova_compute[251290]: 2026-02-02 11:55:46.977 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:55:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:55:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:46] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:55:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:47 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.169 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.169 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.183 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing inventories for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.197 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating ProviderTree inventory for provider 92919e7b-7846-4645-9401-9fd55bbbf435 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.197 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Updating inventory in ProviderTree for provider 92919e7b-7846-4645-9401-9fd55bbbf435 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.212 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing aggregate associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.234 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Refreshing trait associations for resource provider 92919e7b-7846-4645-9401-9fd55bbbf435, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.251 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:55:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:47.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:55:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:47.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2336817765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:55:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662588183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.701 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.706 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.738 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.740 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:55:47 compute-0 nova_compute[251290]: 2026-02-02 11:55:47.740 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
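
The inventory nova pushed to placement above determines schedulable capacity as (total - reserved) × allocation_ratio per resource class, which is how this host's 8 physical vCPUs become 32 schedulable ones. A worked check with the exact inventory from this audit:

    # Effective capacity from the inventory reported above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: schedulable {capacity:g}')  # VCPU 32, MEMORY_MB 7167, DISK_GB 52.2
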
Feb 02 11:55:48 compute-0 nova_compute[251290]: 2026-02-02 11:55:48.415 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:48 compute-0 nova_compute[251290]: 2026-02-02 11:55:48.416 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:48 compute-0 nova_compute[251290]: 2026-02-02 11:55:48.416 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:48 compute-0 ceph-mon[74676]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1662588183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3139858339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:48.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:49.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:49 compute-0 nova_compute[251290]: 2026-02-02 11:55:49.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:49 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:49 compute-0 nova_compute[251290]: 2026-02-02 11:55:49.200 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1303674666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1598632394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:49 compute-0 sudo[289605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:55:49 compute-0 sudo[289605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:49 compute-0 sudo[289605]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:50 compute-0 ceph-mon[74676]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2503864218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:55:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:51 compute-0 nova_compute[251290]: 2026-02-02 11:55:51.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:51 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:51 compute-0 nova_compute[251290]: 2026-02-02 11:55:51.388 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:52 compute-0 nova_compute[251290]: 2026-02-02 11:55:52.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:52.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:52 compute-0 ceph-mon[74676]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:55:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:52.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:53 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:53.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:54 compute-0 nova_compute[251290]: 2026-02-02 11:55:54.204 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:54.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:54 compute-0 ceph-mon[74676]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:55:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:54.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:55:55 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:55 compute-0 sudo[289635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:55:55 compute-0 sudo[289635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:55 compute-0 sudo[289635]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:55 compute-0 sudo[289660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:55:55 compute-0 sudo[289660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:55 compute-0 sudo[289660]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:55:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:55:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:55:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:55:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:55:56 compute-0 nova_compute[251290]: 2026-02-02 11:55:56.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:56 compute-0 nova_compute[251290]: 2026-02-02 11:55:56.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:55:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:55:56 compute-0 sudo[289717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:55:56 compute-0 sudo[289717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:56 compute-0 sudo[289717]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:56 compute-0 sudo[289742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:55:56 compute-0 sudo[289742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:56 compute-0 nova_compute[251290]: 2026-02-02 11:55:56.390 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:56.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.57700458 +0000 UTC m=+0.048063883 container create 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:55:56 compute-0 systemd[1]: Started libpod-conmon-5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf.scope.
Feb 02 11:55:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.556000442 +0000 UTC m=+0.027059775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.664837924 +0000 UTC m=+0.135897247 container init 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.672942848 +0000 UTC m=+0.144002151 container start 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.676039858 +0000 UTC m=+0.147099191 container attach 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:55:56 compute-0 systemd[1]: libpod-5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf.scope: Deactivated successfully.
Feb 02 11:55:56 compute-0 vibrant_williamson[289826]: 167 167
Feb 02 11:55:56 compute-0 conmon[289826]: conmon 5317ab8a7895146b3032 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf.scope/container/memory.events
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.682114844 +0000 UTC m=+0.153174137 container died 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd1fac195894b187b3dc3671fb95b4dfc485dafe2950a1b2390874f86799542-merged.mount: Deactivated successfully.
Feb 02 11:55:56 compute-0 podman[289809]: 2026-02-02 11:55:56.719563088 +0000 UTC m=+0.190622431 container remove 5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_williamson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:55:56 compute-0 ceph-mon[74676]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:55:56 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:55:56 compute-0 systemd[1]: libpod-conmon-5317ab8a7895146b3032e156cb1509d9c8e7e2c97f14b52776a9a1ef8773adbf.scope: Deactivated successfully.
Feb 02 11:55:56 compute-0 podman[289851]: 2026-02-02 11:55:56.913613298 +0000 UTC m=+0.035163540 container create 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb 02 11:55:56 compute-0 systemd[1]: Started libpod-conmon-5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0.scope.
Feb 02 11:55:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:56 compute-0 podman[289851]: 2026-02-02 11:55:56.989962819 +0000 UTC m=+0.111513081 container init 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 11:55:56 compute-0 podman[289851]: 2026-02-02 11:55:56.898422918 +0000 UTC m=+0.019973180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:56 compute-0 podman[289851]: 2026-02-02 11:55:56.996540689 +0000 UTC m=+0.118090931 container start 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:55:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:55:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:55:56] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb 02 11:55:57 compute-0 podman[289851]: 2026-02-02 11:55:57.000568766 +0000 UTC m=+0.122119038 container attach 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:55:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:55:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:57.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:57 compute-0 gracious_elbakyan[289867]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:55:57 compute-0 gracious_elbakyan[289867]: --> All data devices are unavailable
Feb 02 11:55:57 compute-0 systemd[1]: libpod-5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0.scope: Deactivated successfully.
Feb 02 11:55:57 compute-0 podman[289851]: 2026-02-02 11:55:57.325768863 +0000 UTC m=+0.447319125 container died 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb 02 11:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dccaed6339a6d92f034391050f1ae07f9867fcf538017eca1406082a8d9f722-merged.mount: Deactivated successfully.
Feb 02 11:55:57 compute-0 podman[289851]: 2026-02-02 11:55:57.372348821 +0000 UTC m=+0.493899063 container remove 5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:55:57 compute-0 systemd[1]: libpod-conmon-5c7d72603b51d48fab641586ed067a9a6b3311820703f3b8b945b84b1fda26e0.scope: Deactivated successfully.
Feb 02 11:55:57 compute-0 sudo[289742]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:57 compute-0 sudo[289896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:55:57 compute-0 sudo[289896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:57 compute-0 sudo[289896]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:57 compute-0 sudo[289921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:55:57 compute-0 sudo[289921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:57 compute-0 ceph-mon[74676]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.885275805 +0000 UTC m=+0.038239239 container create 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 11:55:57 compute-0 systemd[1]: Started libpod-conmon-1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c.scope.
Feb 02 11:55:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.961530413 +0000 UTC m=+0.114493867 container init 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.869896909 +0000 UTC m=+0.022860363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.967925708 +0000 UTC m=+0.120889142 container start 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.971249094 +0000 UTC m=+0.124212528 container attach 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:55:57 compute-0 hopeful_mcclintock[290005]: 167 167
Feb 02 11:55:57 compute-0 systemd[1]: libpod-1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c.scope: Deactivated successfully.
Feb 02 11:55:57 compute-0 conmon[290005]: conmon 1ff6a0e78ffdc5e8b8bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c.scope/container/memory.events
Feb 02 11:55:57 compute-0 podman[289988]: 2026-02-02 11:55:57.975432725 +0000 UTC m=+0.128396159 container died 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dbd8119be0f4a5c61e512bec302b4c2b502581fa06ccadc2f7df437d7685bb5-merged.mount: Deactivated successfully.
Feb 02 11:55:58 compute-0 podman[289988]: 2026-02-02 11:55:58.015275029 +0000 UTC m=+0.168238473 container remove 1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb 02 11:55:58 compute-0 systemd[1]: libpod-conmon-1ff6a0e78ffdc5e8b8bc7a74df7e9d2f68cee72e56162ecbe3a7960efbf8848c.scope: Deactivated successfully.
Feb 02 11:55:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.142793312 +0000 UTC m=+0.034758858 container create 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb 02 11:55:58 compute-0 systemd[1]: Started libpod-conmon-74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3.scope.
Feb 02 11:55:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d83181d20cc01becccb5a93f2c42721173379d0714b19616c0ad022172debe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d83181d20cc01becccb5a93f2c42721173379d0714b19616c0ad022172debe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d83181d20cc01becccb5a93f2c42721173379d0714b19616c0ad022172debe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d83181d20cc01becccb5a93f2c42721173379d0714b19616c0ad022172debe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.128333563 +0000 UTC m=+0.020299129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.234508337 +0000 UTC m=+0.126473903 container init 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.240868172 +0000 UTC m=+0.132833718 container start 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.243849838 +0000 UTC m=+0.135815384 container attach 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]: {
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:     "1": [
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:         {
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "devices": [
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "/dev/loop3"
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             ],
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "lv_name": "ceph_lv0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "lv_size": "21470642176",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "name": "ceph_lv0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "tags": {
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.cluster_name": "ceph",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.crush_device_class": "",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.encrypted": "0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.osd_id": "1",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.type": "block",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.vdo": "0",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:                 "ceph.with_tpm": "0"
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             },
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "type": "block",
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:             "vg_name": "ceph_vg0"
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:         }
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]:     ]
Feb 02 11:55:58 compute-0 dazzling_murdock[290046]: }
Feb 02 11:55:58 compute-0 systemd[1]: libpod-74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3.scope: Deactivated successfully.
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.526827632 +0000 UTC m=+0.418793208 container died 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 11:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d83181d20cc01becccb5a93f2c42721173379d0714b19616c0ad022172debe3-merged.mount: Deactivated successfully.
Feb 02 11:55:58 compute-0 podman[290029]: 2026-02-02 11:55:58.562140575 +0000 UTC m=+0.454106121 container remove 74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_murdock, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb 02 11:55:58 compute-0 systemd[1]: libpod-conmon-74210a1196bf039afcb3d863b4493a14a167242ba7a2ef7ace11fe71de880ae3.scope: Deactivated successfully.
Feb 02 11:55:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:55:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:55:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:55:58 compute-0 sudo[289921]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:58 compute-0 sudo[290067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:55:58 compute-0 sudo[290067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:58 compute-0 sudo[290067]: pam_unix(sudo:session): session closed for user root
Feb 02 11:55:58 compute-0 sudo[290092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:55:58 compute-0 sudo[290092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:55:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:55:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:55:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:55:58.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:55:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:55:59.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:55:59 compute-0 nova_compute[251290]: 2026-02-02 11:55:59.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:55:59 compute-0 nova_compute[251290]: 2026-02-02 11:55:59.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:55:59 compute-0 nova_compute[251290]: 2026-02-02 11:55:59.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:55:59 compute-0 nova_compute[251290]: 2026-02-02 11:55:59.037 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.066680374 +0000 UTC m=+0.042707088 container create 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb 02 11:55:59 compute-0 systemd[1]: Started libpod-conmon-58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14.scope.
Feb 02 11:55:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.137505375 +0000 UTC m=+0.113532109 container init 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.048725924 +0000 UTC m=+0.024752668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.143507619 +0000 UTC m=+0.119534333 container start 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.147084802 +0000 UTC m=+0.123111536 container attach 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Feb 02 11:55:59 compute-0 ecstatic_mcnulty[290174]: 167 167
Feb 02 11:55:59 compute-0 systemd[1]: libpod-58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14.scope: Deactivated successfully.
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.151205602 +0000 UTC m=+0.127232346 container died 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Feb 02 11:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-93925550045d86131275668828aba63ebc4947b949cca6483770cec54335db09-merged.mount: Deactivated successfully.
Feb 02 11:55:59 compute-0 podman[290158]: 2026-02-02 11:55:59.187953166 +0000 UTC m=+0.163979880 container remove 58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mcnulty, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:55:59 compute-0 systemd[1]: libpod-conmon-58562690692cbca3bce27aaa84c19fc5bdbb484f8c866e44e33ed4bf7ef2ec14.scope: Deactivated successfully.
Feb 02 11:55:59 compute-0 nova_compute[251290]: 2026-02-02 11:55:59.208 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:55:59 compute-0 podman[290197]: 2026-02-02 11:55:59.318462895 +0000 UTC m=+0.038216228 container create 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:55:59 compute-0 systemd[1]: Started libpod-conmon-92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15.scope.
Feb 02 11:55:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8e293ca814a82f6e1a1266576aa57740a407c71356320766498101381d21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8e293ca814a82f6e1a1266576aa57740a407c71356320766498101381d21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8e293ca814a82f6e1a1266576aa57740a407c71356320766498101381d21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8e293ca814a82f6e1a1266576aa57740a407c71356320766498101381d21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:55:59 compute-0 podman[290197]: 2026-02-02 11:55:59.380159802 +0000 UTC m=+0.099913145 container init 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:55:59 compute-0 podman[290197]: 2026-02-02 11:55:59.386587418 +0000 UTC m=+0.106340751 container start 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb 02 11:55:59 compute-0 podman[290197]: 2026-02-02 11:55:59.389979896 +0000 UTC m=+0.109733229 container attach 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb 02 11:55:59 compute-0 podman[290197]: 2026-02-02 11:55:59.30101325 +0000 UTC m=+0.020766603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:55:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:55:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:59 compute-0 ceph-mon[74676]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:55:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:55:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:55:59 compute-0 lvm[290289]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:55:59 compute-0 lvm[290289]: VG ceph_vg0 finished
Feb 02 11:56:00 compute-0 strange_brown[290214]: {}
Feb 02 11:56:00 compute-0 systemd[1]: libpod-92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15.scope: Deactivated successfully.
Feb 02 11:56:00 compute-0 podman[290197]: 2026-02-02 11:56:00.062699766 +0000 UTC m=+0.782453119 container died 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb 02 11:56:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f33f8e293ca814a82f6e1a1266576aa57740a407c71356320766498101381d21-merged.mount: Deactivated successfully.
Feb 02 11:56:00 compute-0 podman[290197]: 2026-02-02 11:56:00.103300462 +0000 UTC m=+0.823053795 container remove 92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brown, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:56:00 compute-0 systemd[1]: libpod-conmon-92f0de62d947806300d89eab8729f59a321ba1dd598e3c309b45d9e236920d15.scope: Deactivated successfully.
Feb 02 11:56:00 compute-0 sudo[290092]: pam_unix(sudo:session): session closed for user root
Feb 02 11:56:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:56:00 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:56:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:56:00 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:56:00 compute-0 sudo[290303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:56:00 compute-0 sudo[290303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:56:00 compute-0 sudo[290303]: pam_unix(sudo:session): session closed for user root
Feb 02 11:56:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:00.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:01 compute-0 ceph-mon[74676]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:56:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:56:01 compute-0 nova_compute[251290]: 2026-02-02 11:56:01.552 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:03 compute-0 ceph-mon[74676]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:03.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:04 compute-0 nova_compute[251290]: 2026-02-02 11:56:04.031 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:04 compute-0 nova_compute[251290]: 2026-02-02 11:56:04.241 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:05 compute-0 ceph-mon[74676]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:56:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:56:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:06 compute-0 nova_compute[251290]: 2026-02-02 11:56:06.695 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:06.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:56:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:06] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:56:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:07 compute-0 ceph-mon[74676]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Feb 02 11:56:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:07.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:08.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:08.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:09.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:09 compute-0 ceph-mon[74676]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:09 compute-0 nova_compute[251290]: 2026-02-02 11:56:09.244 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:09 compute-0 sudo[290338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:56:09 compute-0 sudo[290338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:56:09 compute-0 sudo[290338]: pam_unix(sudo:session): session closed for user root
Feb 02 11:56:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:10.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:10.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:11 compute-0 ceph-mon[74676]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:11 compute-0 podman[290364]: 2026-02-02 11:56:11.265521389 +0000 UTC m=+0.050551335 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 02 11:56:11 compute-0 podman[290365]: 2026-02-02 11:56:11.316011261 +0000 UTC m=+0.100359007 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 11:56:11 compute-0 nova_compute[251290]: 2026-02-02 11:56:11.695 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:13 compute-0 ceph-mon[74676]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:13.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:14 compute-0 nova_compute[251290]: 2026-02-02 11:56:14.249 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:56:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:15 compute-0 ceph-mon[74676]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:15 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:16 compute-0 nova_compute[251290]: 2026-02-02 11:56:16.737 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:16.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:16] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:17.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:17 compute-0 ceph-mon[74676]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:18.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:19.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:19 compute-0 nova_compute[251290]: 2026-02-02 11:56:19.296 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:19 compute-0 ceph-mon[74676]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:20.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:21 compute-0 ceph-mon[74676]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:21 compute-0 nova_compute[251290]: 2026-02-02 11:56:21.777 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:56:22.699 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:56:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:56:22.700 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:56:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:56:22.700 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:56:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:22.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:23 compute-0 ceph-mon[74676]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:23.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:24 compute-0 nova_compute[251290]: 2026-02-02 11:56:24.344 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:24.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:25 compute-0 ceph-mon[74676]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:26.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:26 compute-0 nova_compute[251290]: 2026-02-02 11:56:26.834 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:26] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:27.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:27.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:56:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:27.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:56:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:27.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
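This alertmanager burst shows the ceph-dashboard receiver failing on both webhooks: POSTs to compute-1 and compute-2 on port 8443 time out at TCP dial, the retries are cancelled, and the notification is dropped. A quick reachability probe from this host, as a sketch (URL copied from the log; the empty-alerts payload is a placeholder, not necessarily what the dashboard receiver expects):

    import json
    import urllib.request

    # Receiver URL taken verbatim from the alertmanager error above.
    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    body = json.dumps({"alerts": []}).encode()  # placeholder payload
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:  # URLError and socket timeouts both land here
        print("unreachable:", exc)  # expect the same i/o timeout alertmanager sees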
Feb 02 11:56:27 compute-0 ceph-mon[74676]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:29.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:29.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:29 compute-0 nova_compute[251290]: 2026-02-02 11:56:29.402 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:56:29
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log']
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
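The balancer pass above ran in upmap mode with a 0.05 max-misplaced ceiling and prepared 0/10 upmap changes, i.e. the PG distribution across the listed pools was already balanced enough that no remapping was proposed. The same conclusion can be read from the mgr directly; a sketch using the standard 'ceph balancer status' command and the client identity seen elsewhere in this log:

    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)  # shows the mode (upmap here), active flag, last optimize result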
Feb 02 11:56:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:56:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:29 compute-0 sudo[290429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:56:29 compute-0 sudo[290429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:56:29 compute-0 sudo[290429]: pam_unix(sudo:session): session closed for user root
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:56:29 compute-0 ceph-mon[74676]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:29 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
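The autoscaler arithmetic above is internally consistent: the capacity figure 64411926528 bytes is the same 60 GiB the pgmap reports, and each pool's pg target is usage-fraction * bias * a budget of 300 PGs (matching mon_target_pg_per_osd = 100 across 3 OSDs), then snapped to a power of two no lower than the pool's pg_num_min. For example 'images': 0.000665858 * 1.0 * 300 = 0.1998, floored up to the default minimum of 32; 'cephfs.cephfs.meta': 5.0873e-07 * 4.0 * 300 = 0.00061, held at its minimum of 16. A sketch reproducing the numbers (the 300 budget and the per-pool minimums of 1 for '.mgr' and 16 for the cephfs meta pool are inferred from the lines above):

    # Reproduce the pg_autoscaler targets from the log lines above.
    # Inferred: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300;
    # per-pool minimums: 1 for '.mgr', 16 for the cephfs meta pool, 32 default.
    PG_BUDGET = 100 * 3

    def next_power_of_two(x: float) -> int:
        # Rounds up; for the tiny raw targets in this log the rounding
        # direction does not change the result.
        n = 1
        while n < x:
            n *= 2
        return n

    def pg_target(usage_fraction: float, bias: float,
                  pg_num_min: int = 32) -> int:
        raw = usage_fraction * bias * PG_BUDGET
        return max(pg_num_min, next_power_of_two(raw))

    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # '.mgr' -> 1
    print(pg_target(0.000665858301588852, 1.0))                  # 'images' -> 32
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # meta -> 16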
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:56:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:56:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:30.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:30 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:31.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:31 compute-0 nova_compute[251290]: 2026-02-02 11:56:31.835 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:32 compute-0 ceph-mon[74676]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:32.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:33 compute-0 ceph-mon[74676]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:56:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:56:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:33.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:34 compute-0 nova_compute[251290]: 2026-02-02 11:56:34.405 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:35.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:35 compute-0 ceph-mon[74676]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Feb 02 11:56:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:36.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:36 compute-0 nova_compute[251290]: 2026-02-02 11:56:36.850 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:36] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:56:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:36] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:56:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:37.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:37.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:37 compute-0 ceph-mon[74676]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:56:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:38.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:39.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:39.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:39 compute-0 nova_compute[251290]: 2026-02-02 11:56:39.408 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:39 compute-0 ceph-mon[74676]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:40.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:41.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:41 compute-0 ceph-mon[74676]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:41 compute-0 nova_compute[251290]: 2026-02-02 11:56:41.852 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:42 compute-0 podman[290466]: 2026-02-02 11:56:42.262447221 +0000 UTC m=+0.049749292 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 11:56:42 compute-0 podman[290467]: 2026-02-02 11:56:42.282626855 +0000 UTC m=+0.068542476 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
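The two podman records are periodic container health checks: ovn_metadata_agent and ovn_controller both report health_status=healthy with a failing streak of 0, each check running the mounted /openstack/healthcheck script. The same state can be queried on demand; a sketch (container names taken from the log):

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):  # names from the log
        status = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        print(f"{name}: {status}")  # expect 'healthy', matching the log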
Feb 02 11:56:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:43.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:43 compute-0 ceph-mon[74676]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:43.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:56:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:43.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:44 compute-0 nova_compute[251290]: 2026-02-02 11:56:44.411 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/261499459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:56:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/261499459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:56:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:56:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:45.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:45 compute-0 ceph-mon[74676]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.050 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.050 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.050 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.051 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.051 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:56:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:56:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515146626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.491 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:56:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1515146626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.616 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.617 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4500MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.617 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.618 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:56:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:46.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.690 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.691 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.713 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:56:46 compute-0 nova_compute[251290]: 2026-02-02 11:56:46.853 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:46] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:46] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:47.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:56:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569952746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:47 compute-0 nova_compute[251290]: 2026-02-02 11:56:47.175 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:56:47 compute-0 nova_compute[251290]: 2026-02-02 11:56:47.180 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:56:47 compute-0 nova_compute[251290]: 2026-02-02 11:56:47.193 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:56:47 compute-0 nova_compute[251290]: 2026-02-02 11:56:47.195 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:56:47 compute-0 nova_compute[251290]: 2026-02-02 11:56:47.195 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
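The interleaved nova_compute lines form one update_available_resource pass: the resource tracker takes the compute_resources lock, shells out twice to 'ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf' (0.440 s and 0.462 s above) to size the RBD-backed disk, finds the placement inventory unchanged, and releases the lock after 0.577 s. The inventory line also fixes the schedulable capacity as (total - reserved) * allocation_ratio: 8 * 4.0 = 32 vCPUs, (7679 - 512) * 1.0 = 7167 MB RAM, and (59 - 1) * 0.9 = 52.2 GB disk. A sketch of the same probe and arithmetic (the command is copied verbatim from the log; the 'stats' keys are standard 'ceph df' JSON):

    import json
    import subprocess

    # Exact command nova_compute runs in the log above.
    CMD = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(CMD, check=True, capture_output=True,
                                   text=True).stdout)
    print("cluster bytes:", df["stats"]["total_bytes"],
          "avail:", df["stats"]["total_avail_bytes"])

    # Schedulable capacity as placement computes it: (total - reserved) * ratio.
    print("vcpus:", (8 - 0) * 4.0)        # 32.0
    print("ram_mb:", (7679 - 512) * 1.0)  # 7167.0
    print("disk_gb:", (59 - 1) * 0.9)     # 52.2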
Feb 02 11:56:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:47.278Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:56:47 compute-0 ceph-mon[74676]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3569952746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:49.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:56:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:49.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:56:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:49.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:49 compute-0 nova_compute[251290]: 2026-02-02 11:56:49.196 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:49 compute-0 nova_compute[251290]: 2026-02-02 11:56:49.197 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:49 compute-0 nova_compute[251290]: 2026-02-02 11:56:49.197 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:49 compute-0 nova_compute[251290]: 2026-02-02 11:56:49.197 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:49 compute-0 nova_compute[251290]: 2026-02-02 11:56:49.414 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:49 compute-0 ceph-mon[74676]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:49 compute-0 sudo[290561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:56:49 compute-0 sudo[290561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:56:49 compute-0 sudo[290561]: pam_unix(sudo:session): session closed for user root
Feb 02 11:56:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2126395302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:50.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:50 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:51 compute-0 nova_compute[251290]: 2026-02-02 11:56:51.013 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:51.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:51 compute-0 ceph-mon[74676]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2216181231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1793814847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:51 compute-0 nova_compute[251290]: 2026-02-02 11:56:51.854 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3509802831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:56:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:53 compute-0 nova_compute[251290]: 2026-02-02 11:56:53.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:53.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:53 compute-0 ceph-mon[74676]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:54 compute-0 nova_compute[251290]: 2026-02-02 11:56:54.419 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:55.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:55 compute-0 ceph-mon[74676]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:56:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:56:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:56:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:56:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:56:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:56:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:56:56 compute-0 nova_compute[251290]: 2026-02-02 11:56:56.859 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:56] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:56:56] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb 02 11:56:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:57.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:56:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:57.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:57 compute-0 ceph-mon[74676]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:56:58 compute-0 nova_compute[251290]: 2026-02-02 11:56:58.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:56:58 compute-0 nova_compute[251290]: 2026-02-02 11:56:58.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:56:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:56:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:56:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:56:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:56:59.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:56:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:56:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:56:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:56:59.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:56:59 compute-0 nova_compute[251290]: 2026-02-02 11:56:59.423 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:56:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:56:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:59 compute-0 ceph-mon[74676]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:56:59 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:56:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:00 compute-0 nova_compute[251290]: 2026-02-02 11:57:00.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:00 compute-0 nova_compute[251290]: 2026-02-02 11:57:00.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:57:00 compute-0 nova_compute[251290]: 2026-02-02 11:57:00.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:57:00 compute-0 nova_compute[251290]: 2026-02-02 11:57:00.040 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:57:00 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:00 compute-0 sudo[290596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:57:00 compute-0 sudo[290596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:00 compute-0 sudo[290596]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:00 compute-0 sudo[290621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Feb 02 11:57:00 compute-0 sudo[290621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:00 compute-0 sudo[290621]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:00 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:01.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:01 compute-0 nova_compute[251290]: 2026-02-02 11:57:01.857 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:01 compute-0 ceph-mon[74676]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:57:02 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:57:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:57:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:02 compute-0 sudo[290680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:57:02 compute-0 sudo[290680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:02 compute-0 sudo[290680]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:02 compute-0 sudo[290705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Feb 02 11:57:02 compute-0 sudo[290705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:03.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.302517141 +0000 UTC m=+0.034998965 container create 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:57:03 compute-0 systemd[1]: Started libpod-conmon-4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5.scope.
Feb 02 11:57:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.36257357 +0000 UTC m=+0.095055414 container init 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 11:57:03 compute-0 ceph-mon[74676]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb 02 11:57:03 compute-0 ceph-mon[74676]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb 02 11:57:03 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.369705887 +0000 UTC m=+0.102187721 container start 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:57:03 compute-0 cranky_babbage[290788]: 167 167
Feb 02 11:57:03 compute-0 systemd[1]: libpod-4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5.scope: Deactivated successfully.
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.376970597 +0000 UTC m=+0.109452501 container attach 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.377333177 +0000 UTC m=+0.109815001 container died 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.287960459 +0000 UTC m=+0.020442313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-573fa8a87995bc693e5351cbaf8bb38b329946f39641bb2080b47f124a24ec41-merged.mount: Deactivated successfully.
Feb 02 11:57:03 compute-0 podman[290772]: 2026-02-02 11:57:03.41369048 +0000 UTC m=+0.146172314 container remove 4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Feb 02 11:57:03 compute-0 systemd[1]: libpod-conmon-4fcaeba991165e6936e18649a06e76326a181db13f5c3a0adc58450cbf5188f5.scope: Deactivated successfully.
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.562234102 +0000 UTC m=+0.047435495 container create ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:57:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:03.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:03 compute-0 systemd[1]: Started libpod-conmon-ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1.scope.
Feb 02 11:57:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.543833929 +0000 UTC m=+0.029035362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.667644644 +0000 UTC m=+0.152846057 container init ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.675974505 +0000 UTC m=+0.161175898 container start ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.679872988 +0000 UTC m=+0.165074381 container attach ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 11:57:03 compute-0 naughty_diffie[290826]: --> passed data devices: 0 physical, 1 LVM
Feb 02 11:57:03 compute-0 naughty_diffie[290826]: --> All data devices are unavailable
Feb 02 11:57:03 compute-0 systemd[1]: libpod-ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1.scope: Deactivated successfully.
Feb 02 11:57:03 compute-0 podman[290810]: 2026-02-02 11:57:03.991577804 +0000 UTC m=+0.476779217 container died ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7b77d5632675400d6b5b8da62d2e488175a7460eadaaae4fed86017bf9771f5-merged.mount: Deactivated successfully.
Feb 02 11:57:04 compute-0 podman[290810]: 2026-02-02 11:57:04.030604424 +0000 UTC m=+0.515805817 container remove ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_diffie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb 02 11:57:04 compute-0 systemd[1]: libpod-conmon-ca474799b1aa64b6103ee47be06f2c0c8c1652d04554f04a4227edebe5a7bda1.scope: Deactivated successfully.
Feb 02 11:57:04 compute-0 sudo[290705]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:04 compute-0 sudo[290856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:57:04 compute-0 sudo[290856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:04 compute-0 sudo[290856]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:04 compute-0 sudo[290881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- lvm list --format json
Feb 02 11:57:04 compute-0 sudo[290881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:04 compute-0 nova_compute[251290]: 2026-02-02 11:57:04.424 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.571392494 +0000 UTC m=+0.036923660 container create 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 11:57:04 compute-0 systemd[1]: Started libpod-conmon-48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556.scope.
Feb 02 11:57:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.642956177 +0000 UTC m=+0.108487353 container init 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.648779845 +0000 UTC m=+0.114311011 container start 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.553401283 +0000 UTC m=+0.018932499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.651818493 +0000 UTC m=+0.117349659 container attach 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 11:57:04 compute-0 frosty_lamport[290965]: 167 167
Feb 02 11:57:04 compute-0 systemd[1]: libpod-48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556.scope: Deactivated successfully.
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.65653831 +0000 UTC m=+0.122069476 container died 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7173ca588ede7d19ee91ed2140800681f11b364e1f5a7364094e780eca5dbf1-merged.mount: Deactivated successfully.
Feb 02 11:57:04 compute-0 podman[290948]: 2026-02-02 11:57:04.687890238 +0000 UTC m=+0.153421404 container remove 48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lamport, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:57:04 compute-0 systemd[1]: libpod-conmon-48eff36a917f24396f6ea5b1edf8abb0eb476df8539853fce88916a3c6dad556.scope: Deactivated successfully.
Feb 02 11:57:04 compute-0 podman[290989]: 2026-02-02 11:57:04.811713993 +0000 UTC m=+0.034268803 container create 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 11:57:04 compute-0 systemd[1]: Started libpod-conmon-4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135.scope.
Feb 02 11:57:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce8289aeb21fee6a2b00d275f334a4f60ee95e9ab9e7a6a513e090b2f924bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce8289aeb21fee6a2b00d275f334a4f60ee95e9ab9e7a6a513e090b2f924bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce8289aeb21fee6a2b00d275f334a4f60ee95e9ab9e7a6a513e090b2f924bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce8289aeb21fee6a2b00d275f334a4f60ee95e9ab9e7a6a513e090b2f924bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:04 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:04 compute-0 podman[290989]: 2026-02-02 11:57:04.87999101 +0000 UTC m=+0.102545820 container init 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 11:57:04 compute-0 podman[290989]: 2026-02-02 11:57:04.887768356 +0000 UTC m=+0.110323166 container start 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:04 compute-0 podman[290989]: 2026-02-02 11:57:04.891911866 +0000 UTC m=+0.114466696 container attach 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:04 compute-0 podman[290989]: 2026-02-02 11:57:04.798013797 +0000 UTC m=+0.020568637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:05 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:05 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:05 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:05.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:05 compute-0 amazing_swirles[291006]: {
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:     "1": [
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:         {
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "devices": [
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "/dev/loop3"
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             ],
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "lv_name": "ceph_lv0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "lv_size": "21470642176",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1d33f80b-d6ca-501c-bac7-184379b89279,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ce0bc48-ed90-4057-9723-8baf8c87f572,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "lv_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "name": "ceph_lv0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "tags": {
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.block_uuid": "9duxWF-hIdd-vQiL-Nxt7-gNKy-lbdZ-35UYuU",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.cluster_fsid": "1d33f80b-d6ca-501c-bac7-184379b89279",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.cluster_name": "ceph",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.crush_device_class": "",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.encrypted": "0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.osd_fsid": "1ce0bc48-ed90-4057-9723-8baf8c87f572",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.osd_id": "1",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.type": "block",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.vdo": "0",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:                 "ceph.with_tpm": "0"
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             },
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "type": "block",
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:             "vg_name": "ceph_vg0"
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:         }
Feb 02 11:57:05 compute-0 amazing_swirles[291006]:     ]
Feb 02 11:57:05 compute-0 amazing_swirles[291006]: }
Feb 02 11:57:05 compute-0 systemd[1]: libpod-4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135.scope: Deactivated successfully.
Feb 02 11:57:05 compute-0 podman[290989]: 2026-02-02 11:57:05.139408803 +0000 UTC m=+0.361963613 container died 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dce8289aeb21fee6a2b00d275f334a4f60ee95e9ab9e7a6a513e090b2f924bc4-merged.mount: Deactivated successfully.
Feb 02 11:57:05 compute-0 podman[290989]: 2026-02-02 11:57:05.175402115 +0000 UTC m=+0.397956925 container remove 4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:57:05 compute-0 systemd[1]: libpod-conmon-4674f0bdc0a2283e0732c32feb5ac29081dbbe088c16af85751c4f5008b45135.scope: Deactivated successfully.
Feb 02 11:57:05 compute-0 sudo[290881]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:05 compute-0 sudo[291028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 11:57:05 compute-0 sudo[291028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:05 compute-0 sudo[291028]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:05 compute-0 sudo[291053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1d33f80b-d6ca-501c-bac7-184379b89279/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1d33f80b-d6ca-501c-bac7-184379b89279 -- raw list --format json
Feb 02 11:57:05 compute-0 sudo[291053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.704670031 +0000 UTC m=+0.043971584 container create f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 11:57:05 compute-0 systemd[1]: Started libpod-conmon-f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97.scope.
Feb 02 11:57:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.767351096 +0000 UTC m=+0.106652669 container init f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.776062119 +0000 UTC m=+0.115363672 container start f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.683886469 +0000 UTC m=+0.023188052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:05 compute-0 heuristic_khayyam[291136]: 167 167
Feb 02 11:57:05 compute-0 systemd[1]: libpod-f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97.scope: Deactivated successfully.
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.78267842 +0000 UTC m=+0.121980043 container attach f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:57:05 compute-0 conmon[291136]: conmon f043757e91662b068c3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97.scope/container/memory.events
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.784085601 +0000 UTC m=+0.123387164 container died f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-016d09882b4650f1f7573c3544cc01e5a0e53ff8581a3fceb343c32f04dab956-merged.mount: Deactivated successfully.
Feb 02 11:57:05 compute-0 podman[291119]: 2026-02-02 11:57:05.836433717 +0000 UTC m=+0.175735270 container remove f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb 02 11:57:05 compute-0 systemd[1]: libpod-conmon-f043757e91662b068c3d40d59a2a1c051e9abc8edf393e92b4ef639a80092b97.scope: Deactivated successfully.
Feb 02 11:57:05 compute-0 ceph-mon[74676]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:05 compute-0 podman[291162]: 2026-02-02 11:57:05.951723295 +0000 UTC m=+0.033196962 container create 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:05 compute-0 systemd[1]: Started libpod-conmon-213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04.scope.
Feb 02 11:57:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 11:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f4bd3be3900a775a3567d6e0e8413f0176611258357fde49de72b30440afd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f4bd3be3900a775a3567d6e0e8413f0176611258357fde49de72b30440afd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f4bd3be3900a775a3567d6e0e8413f0176611258357fde49de72b30440afd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f4bd3be3900a775a3567d6e0e8413f0176611258357fde49de72b30440afd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 11:57:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:05 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:06 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:06.012924027 +0000 UTC m=+0.094397724 container init 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:06.018036795 +0000 UTC m=+0.099510462 container start 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:06.020998971 +0000 UTC m=+0.102472648 container attach 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:05.938402929 +0000 UTC m=+0.019876616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb 02 11:57:06 compute-0 lvm[291253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:57:06 compute-0 lvm[291253]: VG ceph_vg0 finished
Feb 02 11:57:06 compute-0 stoic_benz[291178]: {}
Feb 02 11:57:06 compute-0 systemd[1]: libpod-213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04.scope: Deactivated successfully.
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:06.636552295 +0000 UTC m=+0.718025962 container died 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb 02 11:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99f4bd3be3900a775a3567d6e0e8413f0176611258357fde49de72b30440afd-merged.mount: Deactivated successfully.
Feb 02 11:57:06 compute-0 podman[291162]: 2026-02-02 11:57:06.67403317 +0000 UTC m=+0.755506847 container remove 213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_benz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 11:57:06 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:06 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:06 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:06 compute-0 systemd[1]: libpod-conmon-213fafb9c548835c7b12a08411604a55493653dc0d5d4125548cc622bc857d04.scope: Deactivated successfully.
Feb 02 11:57:06 compute-0 sudo[291053]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 11:57:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:06 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 11:57:06 compute-0 ceph-mon[74676]: log_channel(audit) log [INF] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:06 compute-0 sudo[291267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 11:57:06 compute-0 sudo[291267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:06 compute-0 sudo[291267]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:06 compute-0 nova_compute[251290]: 2026-02-02 11:57:06.859 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:06 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:06 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:06] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:57:06 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:06] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb 02 11:57:07 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:07 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:57:07 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:07.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:57:07 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:07.280Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:07 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:07.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:07 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' 
Feb 02 11:57:07 compute-0 ceph-mon[74676]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:08 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:08 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:08 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:08 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:09 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:09.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:09 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:09 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:09 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:09.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:09 compute-0 nova_compute[251290]: 2026-02-02 11:57:09.427 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:09 compute-0 ceph-mon[74676]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:10 compute-0 sudo[291296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:57:10 compute-0 sudo[291296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:10 compute-0 sudo[291296]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:10 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:10 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:10 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:10 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:57:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:10 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:11 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:11 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:11 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:11 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:11 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:11 compute-0 nova_compute[251290]: 2026-02-02 11:57:11.861 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:11 compute-0 ceph-mon[74676]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Feb 02 11:57:12 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:12 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:12 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:12 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:12 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:13 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:13 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:13 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:13 compute-0 podman[291324]: 2026-02-02 11:57:13.292618447 +0000 UTC m=+0.077289509 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:57:13 compute-0 podman[291325]: 2026-02-02 11:57:13.320499264 +0000 UTC m=+0.102866999 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 11:57:13 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:13.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:13 compute-0 ceph-mon[74676]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Feb 02 11:57:14 compute-0 nova_compute[251290]: 2026-02-02 11:57:14.430 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:14 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:57:14 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:14 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:14 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:14 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:14 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:14 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:15 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:15 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:15 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:15 compute-0 ceph-mon[74676]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:15 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:16 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:16 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:16 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:16 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:16 compute-0 nova_compute[251290]: 2026-02-02 11:57:16.864 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:16 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:16 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:57:16 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:16] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:57:17 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:17 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:17 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:17.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:17 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:17.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:57:17 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:17.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:57:18 compute-0 ceph-mon[74676]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:18 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:18 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:18 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:18 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:19 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:19.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:19 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:19 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:19 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:19.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:19 compute-0 nova_compute[251290]: 2026-02-02 11:57:19.434 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:20 compute-0 ceph-mon[74676]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:20 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:20 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:20 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:20 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:20 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:21 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:21 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:21 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:21 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:21 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:21 compute-0 nova_compute[251290]: 2026-02-02 11:57:21.865 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:22 compute-0 ceph-mon[74676]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:22 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:22 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:22 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:22 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:57:22.701 165304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:57:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:57:22.701 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:57:22 compute-0 ovn_metadata_agent[165299]: 2026-02-02 11:57:22.701 165304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:57:22 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:23 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:23 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.002000058s ======
Feb 02 11:57:23 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:23.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Feb 02 11:57:23 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:23.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:24 compute-0 ceph-mon[74676]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:24 compute-0 nova_compute[251290]: 2026-02-02 11:57:24.438 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:24 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:24 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:24 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:24 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:25 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:25 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:25 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:25.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:25 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:26 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:26 compute-0 ceph-mon[74676]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:26 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:26 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:26 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:26 compute-0 nova_compute[251290]: 2026-02-02 11:57:26.868 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:26 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:26 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:57:26 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:26] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb 02 11:57:27 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:27 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:27 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:27 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:27 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:27.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:28 compute-0 ceph-mon[74676]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:28 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:28 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:28 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:28 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:29 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:29.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:29 compute-0 ceph-mon[74676]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:29 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:29 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:29 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:29.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:29 compute-0 nova_compute[251290]: 2026-02-02 11:57:29.440 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Optimize plan auto_2026-02-02_11:57:29
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [balancer INFO root] do_upmap
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 11:57:29 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:57:29 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:29 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:30 compute-0 sudo[291387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:57:30 compute-0 sudo[291387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:30 compute-0 sudo[291387]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:30 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
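[Note] The pg_autoscaler sweep above prints, for each pool, its share of the 64411926528-byte (60 GiB) raw capacity and the PG count it would choose. The logged targets are consistent with target = usage_ratio x bias x 300, where 300 is plausibly the default mon_target_pg_per_osd (100) times this cluster's three OSDs; that multiplier is inferred from the numbers, not stated in the log. A minimal sketch under that assumption:

    # Reproduce the "pg target" values logged above (a sketch, not the
    # autoscaler source). PGS_PER_OSD and OSDS are assumptions inferred
    # from the numbers: 100 is the default mon_target_pg_per_osd, and
    # the 60 GiB cluster appears to consist of 3 OSDs.
    PGS_PER_OSD = 100
    OSDS = 3
    pools = [                      # (name, usage_ratio, bias) from the log
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.000665858301588852,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, usage, bias in pools:
        print(name, usage * bias * PGS_PER_OSD * OSDS)
    # .mgr 0.0021557249951162337, images ~0.19975749047665559,
    # cephfs.cephfs.meta 0.0006104707950771635 -- matching the logged
    # targets up to the last float digit.

The raw target is then quantized to a power of two (1 for .mgr, a suggested 16 for cephfs.cephfs.meta against a current 32), and a new value is only applied when it differs from the current one by more than the autoscaler's change threshold, which is why every pool here stays put.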
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 11:57:30 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:30 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:30 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
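[Note] The radosgw triplets recurring through this window ("====== starting new request", "====== req done", and a beast access line) are anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102, each answered 200 within about a millisecond: the signature of load-balancer health checks rather than user traffic. A throwaway way to confirm the latency profile from a journal export (sketch; reads journalctl output on stdin):

    import re, sys

    # Collect every latency=...s field from the beast access-log lines.
    lat = [float(m.group(1))
           for m in re.finditer(r"latency=([0-9.]+)s", sys.stdin.read())]
    if lat:
        print(f"{len(lat)} requests, max latency {max(lat):.9f}s")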
Feb 02 11:57:30 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:31 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:31 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
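[Note] The four ganesha.nfsd lines above form one grace-period cycle: the server declares a 90-second grace window, reloads (zero) reclaimable client records from the RADOS recovery backend, checks whether grace can be lifted (clid count(0)), and the cluster-wide enforcement query returns a negative status (ret=-45). The identical cycle repeats below at roughly five-second intervals. A sketch for measuring that cadence from a journalctl export (timestamp layout copied from the lines above; day/month order is an assumption):

    import re, sys
    from datetime import datetime

    # Match the ganesha "NFS Server Now IN GRACE" events and diff their
    # timestamps; "%d/%m/%Y" is assumed from the 02/02/2026 form above.
    PAT = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) : epoch .* IN GRACE")
    times = [datetime.strptime(m.group(1), "%d/%m/%Y %H:%M:%S")
             for m in PAT.finditer(sys.stdin.read())]
    for a, b in zip(times, times[1:]):
        print((b - a).total_seconds(), "seconds apart")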
Feb 02 11:57:31 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:31 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:31 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:31.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:31 compute-0 ceph-mon[74676]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:31 compute-0 nova_compute[251290]: 2026-02-02 11:57:31.870 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:32 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:32 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:32 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:32 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:32 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:33 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:33 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:33 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:33.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:33.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:57:33 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:33.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
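[Note] Both ceph-dashboard webhook receivers are unreachable from this Alertmanager: the dials to compute-1 and compute-2 on port 8443 time out, so every notification attempt is retried and then canceled, and the same error pair will keep recurring below. A self-contained probe of the same endpoints (hostnames and path copied from the messages above; the empty POST body and 5-second timeout are arbitrary):

    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            urllib.request.urlopen(url, data=b"{}", timeout=5)
            print(url, "reachable")
        except OSError as exc:   # URLError and socket timeouts both land here
            print(url, "failed:", exc)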
Feb 02 11:57:33 compute-0 ceph-mon[74676]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:34 compute-0 nova_compute[251290]: 2026-02-02 11:57:34.444 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:34 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:34 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:34 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:34 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:35 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:35 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:35 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:35.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:35 compute-0 ceph-mon[74676]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:35 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:36 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:36 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:36 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:36 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:36 compute-0 nova_compute[251290]: 2026-02-02 11:57:36.874 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:36 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:36 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:36] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:57:36 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:36] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb 02 11:57:37 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:37 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:37 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:37 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:37.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:37 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:37.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:37 compute-0 sshd-session[291419]: Accepted publickey for zuul from 192.168.122.10 port 46598 ssh2: ECDSA SHA256:ozTirW0PAEglr4FSz02xqqnzqvmOYZwLEHk/ZObdJGU
Feb 02 11:57:37 compute-0 systemd-logind[793]: New session 59 of user zuul.
Feb 02 11:57:37 compute-0 systemd[1]: Started Session 59 of User zuul.
Feb 02 11:57:37 compute-0 sshd-session[291419]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 11:57:37 compute-0 sudo[291424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 02 11:57:37 compute-0 sudo[291424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
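[Note] This sudo line marks the start of an sos report run, and it explains the traffic that follows: the bursts of client.admin commands in the audit channels below (orch status, crash ls, balancer eval/status, report, config-key dump, log last, ...) are sos's Ceph collection, not an operator typing at a shell. A quick tally of what gets dispatched, from a journal export (sketch; reads journalctl output on stdin):

    import re, sys
    from collections import Counter

    # Pull the "prefix" out of every cmd=[{"prefix": "..."}] audit entry.
    PAT = re.compile(r'cmd=\[\{"prefix": "([^"]+)"')
    counts = Counter(m.group(1)
                     for line in sys.stdin
                     for m in PAT.finditer(line))
    for prefix, n in counts.most_common():
        print(n, prefix)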
Feb 02 11:57:37 compute-0 ceph-mon[74676]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:38 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:38 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:38 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:38 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:39 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:39.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:39 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:39 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:39 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:39.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:39 compute-0 nova_compute[251290]: 2026-02-02 11:57:39.454 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27562 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:39 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18054 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mon[74676]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27568 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18063 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28046 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:40 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:40 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:40 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 11:57:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1770967695' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 11:57:40 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383343646' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:57:40 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:40 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:41 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:41 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:41 compute-0 ceph-mon[74676]: from='client.27562 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:41 compute-0 ceph-mon[74676]: from='client.18054 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:41 compute-0 ceph-mon[74676]: from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1770967695' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:57:41 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/383343646' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:57:41 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:41 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:41 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:41 compute-0 nova_compute[251290]: 2026-02-02 11:57:41.874 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:42 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:42 compute-0 ceph-mon[74676]: from='client.27568 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:42 compute-0 ceph-mon[74676]: from='client.18063 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:42 compute-0 ceph-mon[74676]: from='client.28046 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:42 compute-0 ceph-mon[74676]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:42 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3267428454' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb 02 11:57:42 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:42 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:42 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:42 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:43 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:43 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:43 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:43.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:43 compute-0 ceph-mon[74676]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:43 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:43.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:43 compute-0 ovs-vsctl[291743]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 02 11:57:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3980786724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb 02 11:57:44 compute-0 ceph-mon[74676]: from='client.? 192.168.122.10:0/3980786724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb 02 11:57:44 compute-0 podman[291783]: 2026-02-02 11:57:44.277485822 +0000 UTC m=+0.060436391 container health_status cce63785daa0f8eb5f137cd900d0333c89cef7ab8bb6348e11ae575acee47cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:57:44 compute-0 podman[291784]: 2026-02-02 11:57:44.339738194 +0000 UTC m=+0.123692242 container health_status daf63181ce0c980feb2e7897ba642596365a294cf84fe58b85ad3090545f5b2b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1de329f7564b5c146a8877ce41bb09e3a24cf4211f433ac2732df85c45f7f2aa-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f-142dbe5e0665ecf276f6355b876746fb6b2d2d6e83eb36544e116e560fe94a7f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 11:57:44 compute-0 nova_compute[251290]: 2026-02-02 11:57:44.456 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:44 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 02 11:57:44 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 02 11:57:44 compute-0 virtqemud[251949]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
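[Note] virtqemud probes the read-only sockets of the modular libvirt daemons and finds three of them absent. On a compute node that deliberately runs only virtqemud, as this EDPM-style deployment appears to, these messages are usually harmless noise rather than a failure. A check of the same paths (copied verbatim from the messages above):

    import os

    # Socket paths come straight from the virtqemud messages.
    for name in ("virtnetworkd", "virtnwfilterd", "virtstoraged"):
        path = f"/var/run/libvirt/{name}-sock-ro"
        print(path, "exists" if os.path.exists(path) else "missing")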
Feb 02 11:57:44 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:57:44 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:44 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:44 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:44 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:44 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:45 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:45 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:45 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:45.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: cache status {prefix=cache status} (starting...)
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:45 compute-0 lvm[292121]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 11:57:45 compute-0 lvm[292121]: VG ceph_vg0 finished
Feb 02 11:57:45 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mon[74676]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: client ls {prefix=client ls} (starting...)
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27598 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:57:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27613 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:57:45 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142853206' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: damage ls {prefix=damage ls} (starting...)
Feb 02 11:57:45 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:45 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:46 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump loads {prefix=dump loads} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18120 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.27598 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3985840674' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.18105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.27613 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1142853206' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3595403861' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27625 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 11:57:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099895321' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 11:57:46 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18141 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27643 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28106 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:46 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:46 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 02 11:57:46 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:46 compute-0 nova_compute[251290]: 2026-02-02 11:57:46.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18156 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:46 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:46] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:57:46 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:46] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:57:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:47 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:47 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:47 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:47.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28121 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: ops {prefix=ops} (starting...)
Feb 02 11:57:47 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 02 11:57:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025180531' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.18120 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.27625 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1099895321' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4226939290' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3020499830' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.18141 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/622169855' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.27643 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.28106 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2912465006' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2378935299' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.18156 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3025180531' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3363115836' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:47.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27673 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 02 11:57:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333025532' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18192 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28142 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27688 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: session ls {prefix=session ls} (starting...)
Feb 02 11:57:47 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg Can't run that command on an inactive MDS!
Feb 02 11:57:47 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 11:57:47 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062001351' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18216 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:47 compute-0 ceph-mds[96554]: mds.cephfs.compute-0.kwzngg asok_command: status {prefix=status} (starting...)
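[Note] Each asok_command pair in this stretch is the sos run querying the MDS admin socket, and mds.cephfs.compute-0.kwzngg refuses every rank-scoped command because it is a standby ("Can't run that command on an inactive MDS!"); the plain "status" request above is one a standby will answer. A sketch for gating such collection on the daemon's state first (daemon name taken from the log; assumes it runs wherever the admin socket is reachable, e.g. inside the MDS container, and that the status JSON carries a "state" field as in recent Ceph releases):

    import json, subprocess

    # Ask the admin socket for the daemon's state before collecting.
    out = subprocess.run(
        ["ceph", "daemon", "mds.cephfs.compute-0.kwzngg", "status"],
        capture_output=True, text=True, check=True).stdout
    state = json.loads(out).get("state")
    print(state)   # e.g. "up:standby" -- skip rank-scoped commands
                   # unless this reports "up:active"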
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.058 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.058 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.058 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.058 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.059 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.28121 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2965908742' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.27673 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/333025532' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.28142 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/293546592' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.27688 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4062001351' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.18216 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/861058366' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1415629152' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3250300671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2361433629' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777637141' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28175 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240176658' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715939569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.538 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.664 251294 WARNING nova.virt.libvirt.driver [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.665 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4235MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.665 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.666 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 11:57:48 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:48 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:48 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.740 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.740 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 11:57:48 compute-0 nova_compute[251290]: 2026-02-02 11:57:48.763 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499515191' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28199 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:48 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:48 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 02 11:57:48 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807103297' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:49.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:49.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb 02 11:57:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:49.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:49 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:49 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:49 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:49 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27754 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:57:49.139+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921641374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.205 251294 DEBUG oslo_concurrency.processutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.210 251294 DEBUG nova.compute.provider_tree [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed in ProviderTree for provider: 92919e7b-7846-4645-9401-9fd55bbbf435 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984491056' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.245 251294 DEBUG nova.scheduler.client.report [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Inventory has not changed for provider 92919e7b-7846-4645-9401-9fd55bbbf435 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.247 251294 DEBUG nova.compute.resource_tracker [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.247 251294 DEBUG oslo_concurrency.lockutils [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/777637141' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.28175 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2240176658' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/976760689' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/715939569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1039388163' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4282876328' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1499515191' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.28199 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1807103297' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3480517797' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3433862765' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1921641374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2984491056' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18312 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:49 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:57:49.365+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:49 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:49 compute-0 nova_compute[251290]: 2026-02-02 11:57:49.458 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2153040578' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584830024' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:57:49 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 02 11:57:49 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022167032' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 11:57:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794596089' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:50 compute-0 sudo[292849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Feb 02 11:57:50 compute-0 sudo[292849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 11:57:50 compute-0 sudo[292849]: pam_unix(sudo:session): session closed for user root
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27805 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.27754 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.18312 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4190977189' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1339104344' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2153040578' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/255768325' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3584830024' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3022167032' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1891984865' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1855038274' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4044247379' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4196231338' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1794596089' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 02 11:57:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410464482' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28256 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: 2026-02-02T11:57:50.359+0000 7f3d02436640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18378 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27823 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:50 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:50 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:50 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:50 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 11:57:50 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181518739' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:50 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18399 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:51 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:51 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27847 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:51 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:51 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:51 compute-0 nova_compute[251290]: 2026-02-02 11:57:51.247 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:51 compute-0 nova_compute[251290]: 2026-02-02 11:57:51.248 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:51 compute-0 nova_compute[251290]: 2026-02-02 11:57:51.248 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:51 compute-0 nova_compute[251290]: 2026-02-02 11:57:51.248 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3410464482' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.28256 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.18378 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1575583631' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4137220767' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.27823 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3181518739' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3370589844' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3323114288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.18399 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1262063585' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3710330805' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: from='client.27847 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:24:59.847438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1622016 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:00.847612+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:01.847868+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:02.848053+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:03.848215+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:04.848385+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:05.848580+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:06.848798+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:07.849013+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:08.849168+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:09.849336+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:10.849470+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:11.849613+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:12.849780+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:13.849939+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:14.850091+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:15.850306+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:16.850452+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:17.850618+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:18.850823+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:19.851125+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:20.851307+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:21.851461+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:22.851650+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:23.852044+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1613824 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:24.852193+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:25.852380+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:26.852537+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:27.852729+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:28.853122+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:29.853294+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:30.853468+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:31.853609+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:32.853773+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:33.853930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:34.854099+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:35.854282+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:36.854443+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:37.854709+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:38.854829+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:39.855035+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:40.855159+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1605632 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:41.855343+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:42.855511+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:43.855668+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:44.855843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:45.856066+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:46.856236+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:47.856405+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:48.856565+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:49.856775+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:50.856935+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1597440 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:51.857068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:52.857321+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:53.857473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:54.857617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:55.857806+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:56.857950+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:57.858123+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:58.858290+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1589248 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:25:59.858477+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:00.858612+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:01.858899+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:02.859146+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:03.859312+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:04.859490+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:05.859772+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:06.860017+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:07.860214+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:08.860475+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:09.860703+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:10.860812+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1581056 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:11.861046+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:12.861270+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:13.861515+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:14.861667+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:15.861834+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:16.861987+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:17.862235+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:18.862419+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 1572864 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:19.862686+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:20.862865+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:21.863085+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:22.863253+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:23.863418+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:24.863621+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:25.863816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:26.864008+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:27.864157+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:28.864304+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:29.864490+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:30.864648+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:31.865033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:32.865204+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:33.865333+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18426 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 11:57:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500822177' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:34.865489+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:35.865770+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:36.865949+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:37.866113+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:38.866250+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:39.866397+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:40.866562+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:41.866701+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:42.866854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:43.867037+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:44.867170+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:45.867394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:46.867629+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:47.867822+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:48.868073+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1564672 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:49.868237+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:50.868420+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:51.868600+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:52.868930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:53.869100+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:54.869295+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:55.869511+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:57.025521+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:58.025698+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:26:59.025883+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:00.026089+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:01.026303+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:02.026531+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:03.026803+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:04.027026+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:05.027202+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:06.027499+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:07.027642+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:08.027832+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:09.027991+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1540096 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:10.028137+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919524 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:11.028270+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:12.028420+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:13.028601+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:14.028799+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 144.730377197s of 144.760253906s, submitted: 3
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:15.028939+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:16.029175+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:17.029345+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:18.029563+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:19.029729+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:20.029905+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:21.030067+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:22.030442+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:23.030565+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:24.030725+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:25.031142+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1523712 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:26.031372+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:27.031547+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:28.031801+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:29.031982+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:30.032162+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1515520 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:31.032429+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 1499136 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:32.032644+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:33.032837+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:34.033112+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:35.033280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:36.033503+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:37.033674+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:38.033908+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:39.034059+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:40.034312+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:41.034464+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:42.034619+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:43.034794+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:44.034929+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:45.035075+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:46.035248+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:47.035409+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:48.035583+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1490944 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba95547400 session 0x55ba96206960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:49.035749+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 1482752 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:50.035886+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 1482752 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:51.036030+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:52.036225+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:53.036491+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:54.036669+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:55.036906+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:56.037184+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:57.037363+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:58.037550+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:27:59.037753+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:00.037920+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:01.038087+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:02.038247+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:03.038394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:04.038565+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:05.038733+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:06.038954+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:07.039111+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:08.039332+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:09.039470+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:10.039611+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1466368 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:11.039830+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:12.040003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:13.040218+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:14.040371+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:15.040508+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:16.040669+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:17.040819+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:18.040980+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:19.041099+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:20.041348+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:21.041473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1449984 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:22.041621+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:23.041709+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:24.041863+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:25.042112+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:26.042419+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:27.042623+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:28.042864+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:29.043073+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:30.043254+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1441792 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:31.043492+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:32.043723+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:33.043991+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:34.044169+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:35.044452+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:36.044692+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:37.044907+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:38.045088+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:39.045258+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:40.045405+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:41.045617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:42.045791+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:43.045942+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:44.046096+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:45.046310+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:46.046526+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:47.046680+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:48.047408+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:49.047609+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:50.047793+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1425408 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:51.047994+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:52.048236+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:53.048420+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1409024 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:54.048582+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:55.048840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:56.049022+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:57.049244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:58.049396+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:28:59.049579+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1400832 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:00.049759+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc ms_handle_reset ms_handle_reset con 0x55ba95546800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3082357126
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: get_auth_request con 0x55ba96fcc000 auth_method 0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc handle_mgr_configure stats_period=5
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:01.049990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96282400 session 0x55ba97769680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:02.050231+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:03.050465+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:04.050707+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:05.051113+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:06.051449+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:07.051692+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:08.051919+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:09.052086+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:10.052331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:11.052568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:12.052843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:13.053068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:14.053299+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:15.053540+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:16.053816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:17.054047+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:18.054300+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:19.054554+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:20.054847+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:21.055110+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:22.055292+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:23.055539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:24.055799+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:25.056012+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:26.056256+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:27.056422+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:28.056701+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:29.056897+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:30.057072+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:31.057299+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:32.057495+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:33.057702+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:34.057974+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:35.058147+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:36.058330+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:37.058522+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:38.058785+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:39.059008+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:40.059234+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:41.059455+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:42.059648+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:43.059900+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:44.060079+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:45.060239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:46.060481+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:47.060655+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:48.060863+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:49.061086+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:50.061294+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97136000 session 0x55ba97e4a5a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:51.061471+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:52.061688+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:53.061897+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:54.062121+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:55.062835+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:56.063064+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:57.063277+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:58.063469+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:29:59.063696+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:00.063855+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:01.064011+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:02.064207+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:03.064444+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:04.064656+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:05.064827+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:06.065101+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918342 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:07.065340+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 172.568267822s of 172.578765869s, submitted: 2
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:08.065578+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:09.065828+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:10.066013+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:11.066379+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919854 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:12.066628+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:13.066853+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:14.067107+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:15.067363+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:16.067618+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:17.067840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:18.068094+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:19.068297+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:20.068450+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98c00 session 0x55ba98009680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:21.068603+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:22.068831+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:23.069021+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:24.069253+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:25.069489+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:26.069816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:27.070084+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:28.070259+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:29.070514+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:30.070701+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:31.070906+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922878 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:32.071156+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:33.071400+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:34.071621+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.114982605s of 27.133071899s, submitted: 3
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:35.071830+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:36.072074+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924390 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:37.074243+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:38.074836+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:39.076517+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:40.078712+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1269760 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:41.079698+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:42.080563+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:43.082003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:44.084276+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:45.085360+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137400 session 0x55ba96206780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:46.086413+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:47.086725+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:48.087437+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:49.088208+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:50.089211+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:51.089839+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6868 writes, 27K keys, 6868 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6868 writes, 1340 syncs, 5.13 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 522 writes, 837 keys, 522 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 522 writes, 250 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba93b6f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
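The indented block above is the tail of RocksDB's periodic statistics dump, which BlueStore emits per column family (the [L] and [P] tags). One oddity worth flagging: the reported occupancy of 18446744073709551615 is 2^64 - 1, a 64-bit counter at its maximum, which almost certainly means the field is unset or underflowed rather than a real entry count. If these dumps need to be machine-checked, the "Block cache entry stats" line splits cleanly into per-entry-type counters; a minimal parsing sketch in Python, assuming only the line format shown above (the regex and variable names are illustrative, not a Ceph or RocksDB API):

import re

# One "Block cache entry stats" line copied verbatim from the dump above.
line = ("Block cache entry stats(count,size,portion): "
        "DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) "
        "IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)")

# Each entry looks like Name(count,size KB,portion%).
pattern = re.compile(r"(\w+)\((\d+),([\d.]+) KB,([\d.e+-]+)%\)")

for name, count, size_kb, portion in pattern.findall(line):
    print(f"{name}: count={count} size={size_kb} KB portion={portion}%")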
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
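The heartbeat line packs store_statfs into hex byte counts. A minimal decode sketch, assuming the first triple is available / internally reserved / total bytes and the data pair is stored / allocated (these field meanings are my reading of the log format, not verified against the Ceph source):

# Hex values copied from the heartbeat line above; field meanings are assumptions.
available, internally_reserved, total = 0x4FCA0C000, 0x0, 0x4FFC00000
data_stored, data_allocated = 0x164792, 0x210000

GiB = 1024 ** 3
print(f"total={total / GiB:.2f} GiB, available={available / GiB:.2f} GiB")
print(f"data: {data_stored / 1024:.1f} KiB stored in {data_allocated / 1024:.1f} KiB allocated")

Run against the values above, this prints roughly 20 GiB total with about 19.95 GiB available, consistent with an essentially empty 20 GiB OSD.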
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:52.090184+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
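prioritycache tune_memory compares the process heap view (mapped, unmapped, heap, presumably from tcmalloc) against the 4 GiB osd_memory_target; with only ~77 MB mapped there is ample headroom, so the tuner leaves the cache budget unchanged (old mem equals new mem on every such line). A toy sketch of that decision, with the shrink rule as an illustrative stand-in rather than Ceph's actual tuner logic:

def tune_memory(target: int, mapped: int, cache_old: int) -> int:
    # Illustrative rule: keep the budget while mapped memory is under target,
    # otherwise give back the overshoot (floor at zero).
    headroom = target - mapped
    return cache_old if headroom >= 0 else max(cache_old + headroom, 0)

# Values from the log line above.
print(tune_memory(target=4294967296, mapped=77275136, cache_old=2845415832))
# 2845415832, matching "old mem: 2845415832 new mem: 2845415832"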
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:53.090817+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:54.091649+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:55.092216+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:56.092722+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923208 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:57.092967+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:58.093324+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.184877396s of 24.231876373s, submitted: 3
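The _kv_sync_thread utilization lines make it easy to confirm the RocksDB commit thread is quiescent: 24.184877396 s idle of 24.231876373 s is about 99.8 % idle, i.e. roughly 0.19 % busy. A one-off computation from such a line (the regex is illustrative):

import re

line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
        "idle 24.184877396s of 24.231876373s, submitted: 3")
idle, total = map(float, re.search(r"idle ([\d.]+)s of ([\d.]+)s", line).groups())
print(f"busy fraction: {1 - idle / total:.4%}")  # prints: busy fraction: 0.1940%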
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:30:59.093634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:00.093977+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1245184 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:01.094280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924720 data_alloc: 218103808 data_used: 176128
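_resize_shards reports how the 2845415832-byte cache budget is carved into kv, kv_onode, meta and data shards; the *_alloc fields account for most but not all of the budget (allocation appears to be chunked), while the *_used fields show the shards are nearly empty. A quick tally, using the numbers from the line above:

# (alloc, used) byte pairs copied from the _resize_shards line above.
shards = {
    "kv": (1207959552, 2144),
    "kv_onode": (234881024, 464),
    "meta": (1140850688, 924720),
    "data": (218103808, 176128),
}
cache_size = 2845415832

allocated = sum(a for a, _ in shards.values())
print(f"allocated {allocated} of {cache_size} budget ({allocated / cache_size:.1%})")
for name, (alloc, used) in shards.items():
    print(f"  {name}: {used}/{alloc} bytes used")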
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:02.094633+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:03.094842+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:04.095043+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:05.095252+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:06.095467+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924720 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:07.095625+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28304 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:08.095854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:09.096012+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:10.096162+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:11.096425+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:12.096651+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:13.096804+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:14.096981+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:15.097217+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:16.097455+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:17.097650+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:18.097887+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:19.098157+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:20.098331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 1228800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97137800 session 0x55ba97e4b0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:21.098479+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:22.098663+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:23.098868+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:24.099199+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:25.099394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:26.099656+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba97136400 session 0x55ba98026960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:27.099887+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:28.100166+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:29.100384+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:30.100533+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:31.100807+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:32.101015+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:33.101218+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:34.101399+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:35.101641+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:36.101917+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:37.102086+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924129 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.840103149s of 38.850219727s, submitted: 2
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:38.102251+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:39.102492+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:40.102802+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1212416 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:41.104871+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:42.105861+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925641 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:43.107171+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:44.107621+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:45.108014+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:46.108234+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:47.108649+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925050 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:48.109016+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:49.109338+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.065451622s of 12.071977615s, submitted: 2
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:50.109628+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:51.109838+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:52.110138+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:53.110406+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:54.110631+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 1196032 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:55.110868+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread fragmentation_score=0.000024 took=0.000113s
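fragmentation_score is BlueStore's allocator fragmentation metric, which as I understand it ranges from 0 (free space fully contiguous) to 1 (maximally fragmented); 0.000024 is effectively zero on this idle OSD. A trivial extraction sketch (the 0.8 alert threshold is an arbitrary illustrative choice, not a Ceph default):

import re

line = "bluestore.MempoolThread fragmentation_score=0.000024 took=0.000113s"
score = float(re.search(r"fragmentation_score=([\d.eE+-]+)", line).group(1))
print(f"fragmentation {score:.6f}: {'investigate' if score > 0.8 else 'negligible'}")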
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:56.111157+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:57.111423+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:58.111794+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:31:59.112016+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:00.112210+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:01.112334+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:02.112620+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:03.112737+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:04.113065+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:05.113279+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:06.113572+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:07.113775+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924459 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:08.113969+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:09.114142+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.164722443s of 20.168237686s, submitted: 1
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:10.114291+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:11.114495+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:12.117135+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:13.117360+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1277952 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:14.117575+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 204800 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:15.117803+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 73728 heap: 78520320 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:16.118081+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 2023424 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:17.118335+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:18.118625+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:19.118890+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:20.119159+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:21.119398+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:22.119685+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:23.120009+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:24.120382+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:25.120795+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:26.121020+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:27.121380+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:28.121547+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:29.121866+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:30.122112+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:31.122313+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:32.122518+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:33.122723+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:34.123014+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:35.123350+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:36.123611+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1990656 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:37.123830+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1990656 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:38.124110+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:39.124259+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:40.124698+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:41.125062+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:42.125265+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 ms_handle_reset con 0x55ba96f98c00 session 0x55ba97d430e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:43.125579+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:44.126186+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:45.126717+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:46.127068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:47.127280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:48.127563+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:49.127848+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:50.128091+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:51.128364+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:52.128630+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925971 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:53.128845+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:54.129054+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:55.129249+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:56.136028+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.293338776s of 46.293270111s, submitted: 250
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:57.136244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927483 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:58.136443+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:32:59.136642+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1998848 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:00.136800+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:01.137244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:02.137413+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928995 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:03.137653+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:04.137840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:05.138075+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:06.138316+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:07.138577+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:08.138829+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:09.138993+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:10.139220+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:11.139474+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:12.139688+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:13.139870+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:14.140066+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:15.140901+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1982464 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:16.141087+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:17.141284+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:18.141457+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:19.141657+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:20.141822+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:21.141969+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:22.142119+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:23.142342+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [5])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:24.142539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:25.142704+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:26.143003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:27.143295+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928404 data_alloc: 218103808 data_used: 176128
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:28.143557+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.104373932s of 32.233356476s, submitted: 3
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0x164792/0x210000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1974272 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:29.143850+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 1867776 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:30.144069+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 143 ms_handle_reset con 0x55ba97137800 session 0x55ba97e30000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 1835008 heap: 80617472 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:31.144239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9ff000/0x0/0x4ffc00000, data 0x16aae9/0x21a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 18546688 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:32.144397+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc201000/0x0/0x4ffc00000, data 0x96aaf9/0xa1b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 144 ms_handle_reset con 0x55ba97136800 session 0x55ba974eb860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001895 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:33.144568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:34.144705+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:35.144797+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0x96cc01/0xa1e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0x96cc01/0xa1e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:36.144966+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 18513920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:37.145123+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:38.145356+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:39.145618+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:40.145860+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba97137000 session 0x55ba973a54a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:41.146096+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:42.146347+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:43.146594+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:44.146761+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:45.146917+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:46.147124+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:47.147379+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:48.147642+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:49.147867+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:50.149129+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:51.149869+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:52.150507+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004653 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:53.151700+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:54.152362+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.817378998s of 26.077272415s, submitted: 73
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:55.152636+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:56.153489+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:57.153950+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006165 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:58.154383+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fa000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:33:59.155029+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:00.155194+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:01.155415+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:02.155674+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:03.155940+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:04.156416+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:05.156932+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:06.157155+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:07.157430+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:08.157716+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:09.158046+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:10.158195+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:11.158666+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:12.159107+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:13.159369+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:14.159627+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:15.159773+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:16.160041+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:17.160226+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004143 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:18.160434+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 17465344 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.596820831s of 24.609909058s, submitted: 3
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba96283000 session 0x55ba972061e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:19.160668+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 17440768 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 ms_handle_reset con 0x55ba96283000 session 0x55ba978ae3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0x96ebd3/0xa21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:20.160836+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 17424384 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:21.161033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 17416192 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96f98c00 session 0x55ba971c50e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97136800 session 0x55ba986d4780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97137000 session 0x55ba986d4960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba97137800 session 0x55ba986d4b40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96283000 session 0x55ba986d4d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:22.161221+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064523 data_alloc: 218103808 data_used: 184320
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:23.161394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:24.161809+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:25.161948+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 15163392 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 ms_handle_reset con 0x55ba96f98c00 session 0x55ba986d4f00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:26.162118+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 15384576 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb50000/0x0/0x4ffc00000, data 0x1014e12/0x10ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:27.163035+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 15384576 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083779 data_alloc: 218103808 data_used: 2904064
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:28.163401+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 11911168 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:29.164316+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9338880 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:30.164478+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9338880 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fbb2e000/0x0/0x4ffc00000, data 0x1038e12/0x10ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:31.164666+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 9297920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _renew_subs
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.401331902s of 12.604059219s, submitted: 31
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:32.165368+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 9297920 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbb2a000/0x0/0x4ffc00000, data 0x103ade4/0x10f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114905 data_alloc: 218103808 data_used: 6942720
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:33.165930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:34.166344+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:35.166530+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:36.166995+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:37.167460+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9265152 heap: 97402880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184561 data_alloc: 218103808 data_used: 6963200
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:38.167636+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbb2a000/0x0/0x4ffc00000, data 0x103ade4/0x10f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 6922240 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:39.168014+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91561984 unmapped: 6897664 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:40.168358+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 6176768 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:41.168660+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:42.169198+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202095 data_alloc: 218103808 data_used: 7163904
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:43.169409+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb0aa000/0x0/0x4ffc00000, data 0x1abbde4/0x1b72000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:44.169660+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92291072 unmapped: 6168576 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:45.169887+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.370252609s of 13.598746300s, submitted: 78
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:46.170302+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:47.170562+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198767 data_alloc: 218103808 data_used: 7163904
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:48.170798+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:49.171589+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:50.171778+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb089000/0x0/0x4ffc00000, data 0x1adcde4/0x1b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:51.172022+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:52.172183+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199039 data_alloc: 218103808 data_used: 7163904
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:53.172439+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:54.172682+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91815936 unmapped: 6643712 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:55.172865+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9802e1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e5f0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261c00 session 0x55ba9772c000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91799552 unmapped: 6660096 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.791528702s of 10.809786797s, submitted: 4
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba98100000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:56.173116+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb080000/0x0/0x4ffc00000, data 0x1ae5de4/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba98008780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 6971392 heap: 98459648 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba97e31a40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:57.173378+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98c00 session 0x55ba97342000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139800 session 0x55ba97527e00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4af00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 9969664 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776c000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba94968780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:58.173534+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243195 data_alloc: 218103808 data_used: 7168000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:34:59.173702+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98c00 session 0x55ba978acb40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:00.173831+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x1fb2de4/0x2069000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba97e5fc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:01.174176+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 9986048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x1fb2de4/0x2069000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba97343860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba973a41e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:02.174360+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96283000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 9641984 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:03.174608+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248541 data_alloc: 218103808 data_used: 7299072
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 4980736 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:04.174850+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:05.175079+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:06.175288+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 4866048 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:07.175505+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fab8e000/0x0/0x4ffc00000, data 0x1fd6df4/0x208e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:08.175681+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280005 data_alloc: 234881024 data_used: 12001280
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:09.175882+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 4833280 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.933655739s of 14.016370773s, submitted: 18
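
_kv_sync_thread is BlueStore's single thread that batches RocksDB transaction commits and syncs them to disk; its periodic utilization line gives idle time over the reporting window plus the number of submitted transactions. These samples show an essentially idle store:

    # Utilization figures from the _kv_sync_thread line above.
    idle, window, submitted = 13.933655739, 14.016370773, 18
    busy = window - idle
    print(f"busy {busy:.3f} s of {window:.3f} s ({100 * busy / window:.2f}%), "
          f"{submitted / window:.2f} txns/s")
    # -> busy 0.083 s of 14.016 s (0.59%), 1.28 txns/s
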
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:10.176034+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba9772de00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 4808704 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:11.176211+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96862208 unmapped: 4751360 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:12.176372+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96862208 unmapped: 4751360 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:13.176503+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280261 data_alloc: 234881024 data_used: 12001280
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1fd9df4/0x2091000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 4702208 heap: 101613568 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
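
The rocksdb "New memtable created" line marks a write-buffer rollover: the active memtable was sealed and a fresh one opened against WAL file #43, with zero immutable memtables still waiting to flush. The WAL number is a useful progress marker when correlating the journal with the on-disk RocksDB files; an extraction sketch (assumes the raw lines are in lines):

    import re

    MEMTABLE_RE = re.compile(r"New memtable created with log file: #(\d+)")

    def memtable_rolls(lines):
        """WAL file numbers of memtable switches, in log order."""
        return [int(m.group(1)) for line in lines
                if (m := MEMTABLE_RE.search(line))]
    # For this section: [43] -- a single rollover in ~90 s of output.
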
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:14.176676+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 8257536 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:15.176846+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 7823360 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:16.177019+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x2974df4/0x2a2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 7553024 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:17.177126+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 7544832 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:18.177315+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370149 data_alloc: 234881024 data_used: 12378112
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:19.177468+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29fbdf4/0x2ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:20.177612+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 7536640 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29fbdf4/0x2ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:21.177822+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.416647911s of 11.216951370s, submitted: 81
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8b9a000/0x0/0x4ffc00000, data 0x2a1adf4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:22.177961+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f8b97000/0x0/0x4ffc00000, data 0x2a1ddf4/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:23.178180+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366021 data_alloc: 234881024 data_used: 12378112
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:24.178324+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 8404992 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:25.178466+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 8396800 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba94bd2b40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96283000 session 0x55ba9732ef00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:26.178669+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 11558912 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97ea8f00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:27.178823+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:28.178991+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214118 data_alloc: 218103808 data_used: 7176192
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac7000/0x0/0x4ffc00000, data 0x1aeede4/0x1ba5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac2000/0x0/0x4ffc00000, data 0x1af3de4/0x1baa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:29.179147+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:30.179300+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:31.179468+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:32.179600+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136800 session 0x55ba986d50e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137000 session 0x55ba98008d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 11935744 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.436764717s of 11.524084091s, submitted: 27
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac2000/0x0/0x4ffc00000, data 0x1af3de4/0x1baa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:33.179812+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba9776ad20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:34.180079+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:35.180307+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:36.180589+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:37.180853+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:38.181116+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
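
Tracking the "data" pair across the heartbeats in this section shows real, if light, churn: stored bytes swing from ~33 MB (0x1fb2de4) up to ~43 MB (0x2974df4) and down to ~9.9 MB (0x974de4 above), consistent with small writes followed by deletions on an otherwise idle OSD; one heartbeat even reports a nonempty op histogram (op hist [1]). A diff sketch over the section's raw lines:

    import re

    STATFS_RE = re.compile(r"store_statfs\(0x[0-9a-f]+/0x[0-9a-f]+/0x[0-9a-f]+, "
                           r"data 0x([0-9a-f]+)/0x([0-9a-f]+)")

    def data_series(lines):
        """(stored, allocated) byte pairs from each heartbeat, oldest first."""
        return [(int(m.group(1), 16), int(m.group(2), 16))
                for line in lines
                if (m := STATFS_RE.search(line))]
    # Stored bytes here range from 9,915,876 (0x974de4) to 43,470,324 (0x2974df4).
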
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:39.181379+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:40.181551+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:41.181846+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:42.182101+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:43.182286+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:44.182478+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:45.182632+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:46.182887+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:47.183097+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:48.183325+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:49.183638+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:50.183907+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:51.184115+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:52.184391+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:53.184582+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039752 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:54.184844+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:55.185034+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:56.185235+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 16244736 heap: 108666880 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:57.185395+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.409597397s of 24.497409821s, submitted: 20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba971c41e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba96ff94a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba978ae3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136800 session 0x55ba94bd21e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137000 session 0x55ba973a54a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:58.185614+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100641 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:35:59.185800+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:00.186652+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:01.186878+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:02.187402+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:03.188055+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100641 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97139400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:04.188539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97139400 session 0x55ba94af10e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 24158208 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba978aed20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:05.188752+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
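
The lone _send_mon_message line shows which monitor this OSD's session is pinned to: mon.compute-0 at v2:192.168.122.100:3300/0, i.e. the msgr2 protocol on the standard v2 monitor port 3300, with a trailing /0 address nonce. A parse sketch matching just this address format (the regex is this note's assumption, not a Ceph API):

    import re

    ADDR_RE = re.compile(r"mon\.(\S+) at (v[12]):([\d.]+):(\d+)/(\d+)")

    m = ADDR_RE.search(
        "monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0")
    name, proto, ip, port, nonce = m.groups()
    print(name, proto, ip, port, nonce)   # compute-0 v2 192.168.122.100 3300 0
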
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 24117248 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:06.188941+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:07.189102+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:08.189287+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148805 data_alloc: 218103808 data_used: 7311360
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:09.191552+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:10.191859+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:11.192190+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:12.192389+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:13.192577+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148805 data_alloc: 218103808 data_used: 7311360
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:14.192839+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:15.193160+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa445000/0x0/0x4ffc00000, data 0x1170de4/0x1227000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 21364736 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:16.193382+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.047296524s of 19.149776459s, submitted: 23
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:17.193626+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 18890752 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:18.193873+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228275 data_alloc: 218103808 data_used: 7340032
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ae0000/0x0/0x4ffc00000, data 0x1ad4de4/0x1b8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:19.194196+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:20.194375+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:21.194826+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ae0000/0x0/0x4ffc00000, data 0x1ad4de4/0x1b8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:22.194988+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:23.195209+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229787 data_alloc: 218103808 data_used: 7340032
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:24.195451+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:25.195658+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:26.195884+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:27.196077+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9adf000/0x0/0x4ffc00000, data 0x1ad6de4/0x1b8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 17547264 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba95f174a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:28.196315+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227747 data_alloc: 218103808 data_used: 7340032
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:29.196516+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.821294785s of 13.115832329s, submitted: 75
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:30.196698+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 17539072 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:31.196896+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:32.197085+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:33.197302+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ade000/0x0/0x4ffc00000, data 0x1ad7de4/0x1b8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227971 data_alloc: 218103808 data_used: 7340032
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:34.197652+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:35.197924+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ade000/0x0/0x4ffc00000, data 0x1ad7de4/0x1b8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:36.198399+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:37.198589+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:38.198759+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227651 data_alloc: 218103808 data_used: 7340032
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 17530880 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:39.198953+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.165346146s of 10.174480438s, submitted: 2
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba962072c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba973430e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 17522688 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97810c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:40.199132+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97810c00 session 0x55ba97206d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:41.199283+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:42.199459+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:43.199677+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:44.199952+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:45.200237+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:46.200581+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:47.200764+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:48.201030+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:49.201205+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:50.201435+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:51.201591+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:52.201865+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:53.202082+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:54.202239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:55.202392+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:56.202646+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:57.202829+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:58.203033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:36:59.203193+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:00.203415+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:01.203609+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:02.203820+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:03.203968+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:04.204211+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:05.204380+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:06.204594+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:07.204814+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:08.205211+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052897 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:09.205398+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 21856256 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:10.205924+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.322380066s of 30.489206314s, submitted: 35
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba971c5a40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba971c4b40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 21839872 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:11.206107+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba94af0780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba96ff81e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97810800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97810800 session 0x55ba96ff8d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97a2b0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97a2a1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:12.206395+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:13.206540+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8a0000/0x0/0x4ffc00000, data 0xd14e46/0xdcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083328 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8a0000/0x0/0x4ffc00000, data 0xd14e46/0xdcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:14.206836+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:15.207008+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:16.207217+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 21757952 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba986d4000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:17.207374+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 21602304 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:18.207555+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 21585920 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100276 data_alloc: 218103808 data_used: 2387968
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:19.207816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:20.208021+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:21.208221+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:22.208405+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:23.208569+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111068 data_alloc: 218103808 data_used: 4001792
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:24.208773+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:25.209125+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:26.209438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:27.209617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:28.209900+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 20873216 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837131500s of 18.049880981s, submitted: 30
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd38e46/0xdf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143858 data_alloc: 218103808 data_used: 4759552
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:29.210090+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 18890752 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:30.210292+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 18022400 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:31.210544+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 18022400 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:32.210813+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:33.210959+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162424 data_alloc: 218103808 data_used: 4984832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:34.211285+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:35.211526+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:36.211843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98025472 unmapped: 17989632 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:37.212066+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 17956864 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:38.212290+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:39.212524+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:40.212795+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:41.212971+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:42.213145+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:43.213331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:44.213515+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:45.213789+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:46.213987+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:47.214258+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x10a1e46/0x1159000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:48.214498+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 17940480 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163336 data_alloc: 218103808 data_used: 5054464
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:49.214679+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba97baed20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97500960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97500b40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97bae960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 17932288 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.843761444s of 21.162544250s, submitted: 64
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97148780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba97bae5a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97136000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97136000 session 0x55ba97526f00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9776c780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978adc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:50.214928+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:51.215112+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:52.215278+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:53.215472+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d4000/0x0/0x4ffc00000, data 0x13e0e46/0x1498000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188922 data_alloc: 218103808 data_used: 5058560
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:54.215725+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:55.215963+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:56.216246+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:57.216467+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d4000/0x0/0x4ffc00000, data 0x13e0e46/0x1498000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba978ac000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:58.216627+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189887 data_alloc: 218103808 data_used: 5058560
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 19259392 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:37:59.552318+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.809700966s of 10.887388229s, submitted: 14
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 18735104 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:00.552639+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:01.552845+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:02.553085+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:03.553241+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204631 data_alloc: 218103808 data_used: 7233536
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:04.553382+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98459648 unmapped: 17555456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:05.553550+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 17522688 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:06.553898+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:07.554021+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:08.554153+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204631 data_alloc: 218103808 data_used: 7233536
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 17514496 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:09.554268+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d1000/0x0/0x4ffc00000, data 0x13e1e69/0x149a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.951482773s of 10.088466644s, submitted: 23
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 14483456 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:10.554391+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101834752 unmapped: 14180352 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:11.554533+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:12.554711+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:13.554889+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247231 data_alloc: 218103808 data_used: 7974912
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:14.555044+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ed1000/0x0/0x4ffc00000, data 0x16e2e69/0x179b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:15.555192+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:16.555391+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 15851520 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:17.555532+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:18.555733+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247247 data_alloc: 218103808 data_used: 7974912
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:19.555919+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba9776d0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974cb680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ed1000/0x0/0x4ffc00000, data 0x16e2e69/0x179b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 15826944 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:20.556065+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881438255s of 10.082806587s, submitted: 18
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba978af860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:21.556191+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:22.556627+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa511000/0x0/0x4ffc00000, data 0x10a2e46/0x115a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:23.556797+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167145 data_alloc: 218103808 data_used: 5058560
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa511000/0x0/0x4ffc00000, data 0x10a2e46/0x115a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:24.557040+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 18006016 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba975010e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba94bd4780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:25.557221+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98017280 unmapped: 17997824 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba978abc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:26.557461+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:27.557780+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:28.557940+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:29.558145+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:30.558301+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:31.558480+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:32.558641+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:33.558864+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:34.559124+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:35.559289+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:36.559494+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:37.559634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:38.559840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070223 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:39.560037+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:40.560167+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:41.560319+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:42.560479+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:43.560696+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 20922368 heap: 116015104 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.462673187s of 23.645784378s, submitted: 51
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145739 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776a960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba972d6d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba971c54a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba95149860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:44.560807+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974cb4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d9000/0x0/0x4ffc00000, data 0x13dcde4/0x1493000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:45.560982+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:46.561448+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:47.561589+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:48.561757+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97e4be00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97d7a1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145739 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:49.561898+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 24764416 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba971494a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba95e40d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:50.562089+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 24715264 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:51.562315+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 24715264 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:52.562467+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:53.562635+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221661 data_alloc: 218103808 data_used: 7557120
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:54.562820+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:55.562969+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:56.563207+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:57.563366+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:58.563503+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221661 data_alloc: 218103808 data_used: 7557120
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:38:59.563646+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:00.563794+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 22757376 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:01.563936+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 22749184 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa1d8000/0x0/0x4ffc00000, data 0x13dce07/0x1494000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.684358597s of 17.943258286s, submitted: 19
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:02.564106+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 17784832 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:03.564249+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0e000/0x0/0x4ffc00000, data 0x1a9ee07/0x1b56000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275807 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:04.564424+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:05.564966+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 18407424 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:06.565228+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:07.565386+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:08.565539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:09.565700+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:10.565842+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:11.565985+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 18374656 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:12.566137+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:13.566300+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:14.566535+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:15.566882+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:16.567335+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:17.567537+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:18.567714+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:19.567948+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:20.568138+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:21.568371+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:22.568565+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:23.568707+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:24.568932+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101367808 unmapped: 18325504 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:25.569128+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:26.569322+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:27.569504+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:28.569675+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x1aa7e07/0x1b5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275823 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:29.569857+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:30.570047+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 18317312 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:31.570232+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba9802f0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336400 session 0x55ba95e40960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba95e405a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba972d65a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 18333696 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.683389664s of 29.841543198s, submitted: 44
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97769c20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba977685a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336800 session 0x55ba981b3c20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba97206f00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba977692c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:32.570385+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 18104320 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9677000/0x0/0x4ffc00000, data 0x1f3de07/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:33.570564+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 18104320 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311921 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:34.570710+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101597184 unmapped: 18096128 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:35.570902+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101597184 unmapped: 18096128 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba972072c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:36.571059+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 18087936 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:37.571262+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 18063360 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:38.571427+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336849 data_alloc: 234881024 data_used: 11227136
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:39.571553+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:40.571731+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:41.571939+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:42.572068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:43.572223+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:44.572400+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336849 data_alloc: 234881024 data_used: 11227136
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:45.572553+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f3ee07/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:46.572786+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 15392768 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:47.572992+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.598789215s of 15.770350456s, submitted: 11
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 14286848 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:48.573119+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 14286848 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:49.573564+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356043 data_alloc: 234881024 data_used: 11460608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 14139392 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:50.573729+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:51.573946+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:52.574148+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:53.574304+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:54.574650+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359763 data_alloc: 234881024 data_used: 11460608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:55.574843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:56.575033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:57.575185+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 14106624 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:58.575359+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:39:59.575507+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359763 data_alloc: 234881024 data_used: 11460608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:00.575655+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97a2a960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97d7a960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 14098432 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.472229958s of 13.571630478s, submitted: 32
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93d7000/0x0/0x4ffc00000, data 0x21dce07/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:01.575837+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba974ebc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:02.575990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:03.576133+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:04.576293+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279871 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:05.576430+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b0c000/0x0/0x4ffc00000, data 0x1aa8e07/0x1b60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:06.576604+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:07.576723+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:08.576926+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:09.577079+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279871 data_alloc: 218103808 data_used: 7553024
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 14737408 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4ad20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba972d70e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:10.577219+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978acf00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:11.577340+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:12.577537+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:13.577845+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:14.578027+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:15.578176+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:16.578373+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:17.578639+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:18.579387+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:19.579722+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:20.580179+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:21.580480+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:22.580668+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:23.581368+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:24.581802+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:25.582543+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:26.583280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:27.584245+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:28.584807+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:29.585523+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:30.585704+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac40000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:31.586294+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:32.586805+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:33.587214+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:34.587624+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085932 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:35.587990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 19668992 heap: 119693312 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.002712250s of 35.214824677s, submitted: 54
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:36.588276+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97207680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba977690e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978ac960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97500780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba971c54a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10d1e0d/0x1189000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:37.588721+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:38.589224+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:39.589431+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147747 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23273472 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:40.589784+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 23429120 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9776da40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:41.590091+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 23429120 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:42.590439+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4be000/0x0/0x4ffc00000, data 0x10f5e69/0x11ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:43.590712+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:44.590984+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201852 data_alloc: 218103808 data_used: 7708672
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba974caf00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba9776be00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 20365312 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:45.591107+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9776a5a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:46.591434+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:47.591608+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:48.591803+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:49.592027+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:50.592228+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:51.592443+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8880 writes, 34K keys, 8880 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8880 writes, 2196 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2012 writes, 6756 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 6.16 MB, 0.01 MB/s
                                           Interval WAL: 2012 writes, 856 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:52.592587+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:53.593037+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:54.593350+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:55.593595+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:56.593901+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:57.594092+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:58.594250+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:40:59.594435+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093962 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:00.594576+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 24322048 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:01.594719+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97206780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba94af0780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974ea000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac3f000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba986d5e00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.948949814s of 25.311471939s, submitted: 68
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9496cf00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978aed20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978af0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978afa40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba951485a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:02.594943+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:03.595125+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:04.595288+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140515 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:05.595438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa77d000/0x0/0x4ffc00000, data 0xe37e46/0xeef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4a780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 24141824 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:06.595617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4bc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96337c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96337c00 session 0x55ba974ebe00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba974cad20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 23830528 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:07.595763+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:08.595900+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:09.596072+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170554 data_alloc: 218103808 data_used: 3780608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa758000/0x0/0x4ffc00000, data 0xe5be56/0xf14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776b2c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 23822336 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:10.596298+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9802f4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa8f7000/0x0/0x4ffc00000, data 0x998df4/0xa50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:11.596470+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:12.596620+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:13.596777+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:14.596940+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:15.597113+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:16.597283+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:17.597418+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:18.597584+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:19.597844+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:20.598054+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:21.598224+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:22.598439+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:23.598739+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:24.599409+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:25.599775+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:26.600287+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:27.600867+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:28.601403+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:29.601942+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:30.602246+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:31.602508+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:32.602710+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:33.602873+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:34.603268+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102213 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 24412160 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:35.603541+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:36.603938+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa91c000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:37.604093+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 24403968 heap: 123895808 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96336000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96336000 session 0x55ba97a2a1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:38.604244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba9732e960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba9732fa40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9802fc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.645248413s of 36.975486755s, submitted: 84
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9732f680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba9496cd20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97137400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97137400 session 0x55ba97baf680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba978aa000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb2c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:39.604568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188891 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:40.604872+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:41.605189+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa02a000/0x0/0x4ffc00000, data 0x158adf4/0x1642000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:42.605450+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba974ea1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa02a000/0x0/0x4ffc00000, data 0x158adf4/0x1642000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba978ae1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 33751040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:43.605680+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba98009860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba955ef0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98869248 unmapped: 33423360 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:44.605907+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194850 data_alloc: 218103808 data_used: 200704
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 33415168 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:45.606028+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:46.606272+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 33415168 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:47.606438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:48.606624+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:49.606808+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280274 data_alloc: 234881024 data_used: 12877824
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:50.607014+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 26836992 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:51.607162+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 26804224 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:52.607303+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:53.607472+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:54.607677+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280274 data_alloc: 234881024 data_used: 12877824
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:55.607860+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x15aee04/0x1667000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:56.608108+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 26796032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.110782623s of 18.214258194s, submitted: 17
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:57.608275+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 18964480 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:58.608487+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:41:59.608791+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:00.608959+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:01.609187+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 19275776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:02.609328+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 19251200 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:03.609643+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 19251200 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:04.609817+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:05.609972+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 19202048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:06.610155+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:07.610287+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:08.610409+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:09.610549+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f93cd000/0x0/0x4ffc00000, data 0x21dee04/0x2297000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379460 data_alloc: 234881024 data_used: 13168640
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:10.610716+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e314a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.625873566s of 13.859022141s, submitted: 76
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97baf0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:11.610854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 19136512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1000 session 0x55ba978af860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:12.611005+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 26509312 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:13.611131+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 26509312 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:14.611271+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 26484736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:15.611429+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 27901952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:16.611628+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 27656192 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:17.611803+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 27484160 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:18.611964+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:19.612128+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:20.612297+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:21.612439+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:22.612601+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:23.612793+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:24.612945+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:25.613098+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:26.613281+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:27.613437+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104816640 unmapped: 27475968 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:28.613602+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:29.614006+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fac41000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115853 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:30.614176+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104824832 unmapped: 27467776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.093173981s of 20.459842682s, submitted: 278
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba98101a40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97e4a000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97e4a960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:31.614327+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97e4ab40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba955ee3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:32.614867+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:33.615356+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:34.615796+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173818 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:35.616037+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:36.616287+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 27607040 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:37.616924+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:38.617188+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa53b000/0x0/0x4ffc00000, data 0x107ade4/0x1131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:39.617394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba971c45a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173818 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b000 session 0x55ba955ee1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:40.617632+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 27598848 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974eb4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927599907s of 10.040019989s, submitted: 21
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba978ad4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:41.617833+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104857600 unmapped: 27435008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:42.617990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 27328512 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:43.618178+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:44.619006+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa516000/0x0/0x4ffc00000, data 0x109edf2/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225820 data_alloc: 218103808 data_used: 7274496
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:45.619290+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:46.619568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:47.619766+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa516000/0x0/0x4ffc00000, data 0x109edf2/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:48.619913+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba95547800 session 0x55ba97ea90e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba9551b000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:49.620140+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225820 data_alloc: 218103808 data_used: 7274496
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:50.620328+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:51.620550+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 25460736 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:52.620785+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.263588905s of 11.275200844s, submitted: 2
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 20799488 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9d37000/0x0/0x4ffc00000, data 0x186fdf2/0x1927000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:53.620939+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114728960 unmapped: 17563648 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c9a000/0x0/0x4ffc00000, data 0x191adf2/0x19d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97149c20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97bae3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba94af0b40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba978afa40
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96f98800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:54.621108+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96f98800 session 0x55ba95e410e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba97bae000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba9776b4a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba97500960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba97baed20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329233 data_alloc: 218103808 data_used: 7700480
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:55.621307+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:56.621560+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:57.621733+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:58.621932+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 19243008 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a7f000/0x0/0x4ffc00000, data 0x1b33e64/0x1bed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:42:59.622070+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 20930560 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a5e000/0x0/0x4ffc00000, data 0x1b54e64/0x1c0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328993 data_alloc: 218103808 data_used: 7700480
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:00.622275+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 20922368 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:01.622434+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 20922368 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:02.622607+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138c00 session 0x55ba97baf860
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111378432 unmapped: 20914176 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96261800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:03.622795+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111394816 unmapped: 20897792 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:04.623088+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111394816 unmapped: 20897792 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115145683s of 12.528117180s, submitted: 147
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333249 data_alloc: 218103808 data_used: 8241152
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:05.623244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a5e000/0x0/0x4ffc00000, data 0x1b54e64/0x1c0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:06.623522+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:07.623659+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 20742144 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:08.623907+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111558656 unmapped: 20733952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:09.624074+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111558656 unmapped: 20733952 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336441 data_alloc: 218103808 data_used: 8765440
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:10.624303+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:11.624490+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a53000/0x0/0x4ffc00000, data 0x1b5fe64/0x1c19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:12.624677+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:13.624927+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:14.625079+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 20725760 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.119070053s of 10.132285118s, submitted: 5
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396157 data_alloc: 218103808 data_used: 8814592
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:15.625217+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 18251776 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9219000/0x0/0x4ffc00000, data 0x2399e64/0x2453000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:16.625400+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 18210816 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:17.625576+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:18.625785+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:19.625944+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407199 data_alloc: 218103808 data_used: 9080832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:20.626118+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:21.626289+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:22.626438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:23.626601+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 18202624 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:24.626824+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 18194432 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405991 data_alloc: 218103808 data_used: 9084928
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:25.626944+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 18186240 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:26.627226+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:27.627387+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:28.627559+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x23a5e64/0x245f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:29.627706+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:30.627908+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405991 data_alloc: 218103808 data_used: 9084928
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:31.628058+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.411430359s of 16.657747269s, submitted: 73
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:32.628257+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 18178048 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9207000/0x0/0x4ffc00000, data 0x23abe64/0x2465000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:33.628410+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974cb680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96261800 session 0x55ba97500d20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 18169856 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:34.628552+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba9496c3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:35.628765+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314411 data_alloc: 218103808 data_used: 7700480
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:36.628930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 19652608 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:37.629098+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c62000/0x0/0x4ffc00000, data 0x1952df2/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:38.629235+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:39.629379+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:40.629565+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314511 data_alloc: 218103808 data_used: 7700480
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 18604032 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:41.629709+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 18595840 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.349160194s of 10.561897278s, submitted: 33
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba95f170e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1800 session 0x55ba974ebe00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:42.630500+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c62000/0x0/0x4ffc00000, data 0x1952df2/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba974eb0e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:43.630645+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:44.630810+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:45.630953+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:46.631191+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:47.631338+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:48.631479+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:49.631723+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:50.631872+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:51.631990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:52.632165+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:53.632338+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:54.632484+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:55.632634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:56.632876+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:57.633033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:58.633194+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:43:59.633330+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 25042944 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc ms_handle_reset ms_handle_reset con 0x55ba96fcc000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3082357126
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3082357126,v1:192.168.122.100:6801/3082357126]
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: get_auth_request con 0x55ba96f98800 auth_method 0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: mgrc handle_mgr_configure stats_period=5
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:00.633503+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138000 session 0x55ba94bd3680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba95547000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260400 session 0x55ba974eaf00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:01.633659+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:02.633816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:03.633977+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:04.634159+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:05.634331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:06.634525+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:07.634649+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:08.634986+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:09.635197+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:10.635483+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140132 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 25411584 heap: 132292608 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:11.635664+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.530868530s of 29.659109116s, submitted: 37
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 21291008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba97501c20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba97d42780
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba9496dc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba974ebc20
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba97e4a3c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:12.635936+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e85000/0x0/0x4ffc00000, data 0x131fe46/0x13d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:13.636142+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:14.636408+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:15.636632+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217867 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba962074a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:16.636907+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9496cf00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 28352512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:17.637123+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1400 session 0x55ba95149680
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1c00
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a1c00 session 0x55ba981014a0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba96260800
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba97138400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 28672000 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:18.637378+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 27271168 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:19.637618+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:20.637867+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285791 data_alloc: 234881024 data_used: 10125312
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:21.638053+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:22.638309+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:23.638602+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:24.638802+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:25.638978+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285791 data_alloc: 234881024 data_used: 10125312
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:26.639246+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:27.639416+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 25493504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e61000/0x0/0x4ffc00000, data 0x1343e46/0x13fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:28.639585+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.617950439s of 16.726493835s, submitted: 35
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 20938752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:29.639735+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:30.640002+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351657 data_alloc: 234881024 data_used: 10366976
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:31.640236+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:32.640455+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:33.640640+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1acee46/0x1b86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 18317312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:34.640862+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:35.641068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345209 data_alloc: 234881024 data_used: 10371072
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:36.641310+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:37.641445+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f96b5000/0x0/0x4ffc00000, data 0x1aefe46/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 18636800 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:38.641685+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 18628608 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.372352600s of 10.616518974s, submitted: 90
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba96260800 session 0x55ba978ae1e0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba97138400 session 0x55ba9772c000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:39.641820+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a0000
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba972a0000 session 0x55ba9732e960
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:40.641990+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:41.642239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:42.642399+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:43.642551+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:44.642799+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:45.642942+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:46.643142+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:47.643318+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:48.643484+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:49.643716+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:50.643967+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:51.644129+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:52.644312+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:53.644490+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:54.644663+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 25419776 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:55.644843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:56.645187+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:57.645363+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:58.645559+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:44:59.645727+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:00.645930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 25411584 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:01.646207+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:02.646373+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:03.646552+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:04.646976+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:05.647161+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:06.647419+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:07.647656+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 25403392 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:08.647925+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:09.648089+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:10.648285+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 25395200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:11.648451+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:12.648629+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:13.648849+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:14.648991+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:15.649315+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:16.649481+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:17.649613+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:18.649794+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:19.650047+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:20.650288+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:21.650497+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:22.650659+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:23.650844+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 25387008 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:24.650980+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:25.651174+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:26.651475+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:27.651654+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:28.651902+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:29.652089+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:30.652239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:31.652413+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:32.652621+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:33.652888+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:34.653131+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:35.653364+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 25370624 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:36.653631+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:37.653848+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:38.654104+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:39.654281+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:40.654456+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:41.654775+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:42.655123+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:43.655489+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:44.655856+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:45.656154+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:46.656472+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:47.656786+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:48.657046+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 25362432 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:49.657384+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:50.657599+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:51.657831+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:52.658003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:53.658193+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:54.658344+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:55.658503+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:56.658689+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:57.658844+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:58.659002+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:45:59.659212+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:00.659563+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:01.659689+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:02.659837+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 25354240 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:03.660003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 25346048 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:04.660234+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 25280512 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:05.660398+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 25509888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:06.660568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 25608192 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:07.660716+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110518272 unmapped: 25452544 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'log dump' '{prefix=log dump}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:08.660882+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110518272 unmapped: 25452544 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf dump' '{prefix=perf dump}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf schema' '{prefix=perf schema}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:09.661013+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 26189824 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:10.661250+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 26189824 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:11.661410+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 26189824 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:12.661579+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:13.661805+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:14.661957+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:15.662092+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:16.662350+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:17.662502+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:18.662666+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:19.662856+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:20.663030+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:21.663245+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 26181632 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:22.663394+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:23.663558+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:24.664002+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:25.664129+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:26.664296+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:27.664428+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:28.664580+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:29.664711+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:30.664851+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:31.665058+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:32.665204+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:33.665346+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:34.665472+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:35.665630+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:36.665800+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:37.665932+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:38.666101+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:39.666284+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:40.666445+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:41.666578+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:42.666814+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:43.666952+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:44.667070+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:45.667197+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:46.667401+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:47.667630+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:48.667781+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:49.667902+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:50.668042+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:51.668267+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:52.668439+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:53.668560+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:54.668826+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:55.669007+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:56.669198+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:57.669331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:58.669498+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:46:59.669666+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:00.669895+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:01.670122+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:02.670355+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:03.670533+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:04.670791+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:05.671023+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:06.671316+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:07.671730+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:08.671939+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:09.672131+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:10.672354+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:11.672599+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:12.672827+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:13.673058+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:14.673369+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:15.673532+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:16.673792+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 26173440 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:17.674513+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:18.675131+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:19.675854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:20.677032+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:21.677574+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:22.677791+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:23.678081+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:24.678372+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:25.678628+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:26.678854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:27.679027+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:28.679366+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:29.679537+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:30.680703+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:31.681603+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:32.681996+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:33.682202+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:34.682489+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:35.683095+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:36.683312+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:37.683487+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:38.684539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:39.684763+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:40.685301+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:41.685447+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:42.685605+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:43.685806+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:44.686000+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:45.686155+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:46.687671+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:47.687915+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:48.688073+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:49.688257+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:50.688451+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:51.688612+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:52.688875+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:53.689048+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:54.689407+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:55.689591+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:56.689821+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:57.690071+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:58.690416+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:47:59.690561+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:00.690817+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:01.691257+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:02.691624+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:03.691964+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:04.692252+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:05.692433+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:06.692848+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:07.693200+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:08.693553+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:09.693870+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:10.694280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:11.694566+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:12.694837+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:13.695096+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:14.695312+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:15.695485+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:16.695721+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:17.695909+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:18.696122+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:19.696358+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:20.696531+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:21.696793+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 26132480 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:22.696962+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:23.697217+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:24.697391+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:25.697598+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:26.697882+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:27.698064+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:28.698367+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:29.698586+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:30.698838+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:31.699090+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:32.699230+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:33.699404+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:34.699653+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:35.699883+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:36.700167+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:37.700405+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:38.700591+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:39.700808+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:40.700969+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:41.701231+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:42.701432+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:43.701712+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:44.701963+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:45.702213+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:46.702493+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:47.702629+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:48.702800+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:49.702972+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:50.703255+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:51.703449+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:52.703634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:53.703891+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:54.704075+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:55.704218+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:56.704424+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:57.704634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:58.704837+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:48:59.705220+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:00.705481+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:01.705675+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:02.705980+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:03.706201+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:04.706468+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:05.706772+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:06.707061+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:07.707331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:08.707514+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:09.707726+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:10.707967+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:11.708151+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:12.708391+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:13.708632+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:14.708802+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:15.709012+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:16.709250+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:17.709444+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:18.709695+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:19.709909+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:20.710081+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:21.710275+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:22.710524+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:23.710777+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:24.710931+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:25.711154+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:26.711350+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:27.711507+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:28.711634+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:29.711823+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:30.711994+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:31.712105+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:32.712291+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:33.712473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:34.712653+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:35.712850+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:36.713071+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:37.713222+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:38.713417+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:39.713614+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:40.713834+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:41.714023+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:42.714191+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:43.714350+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:44.714508+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:45.714622+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:46.714818+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:47.715000+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:48.715172+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:49.715322+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:50.715441+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:51.715576+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:52.715802+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:53.715940+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:54.716111+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:55.716270+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:56.716526+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:57.716693+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:58.716850+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:49:59.717010+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:00.717188+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:01.717330+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:02.717508+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:03.717652+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:04.717838+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:05.718035+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:06.718214+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:07.718322+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:08.718459+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:09.718668+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:10.718890+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:11.719036+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:12.719184+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:13.719358+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:14.719576+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:15.719726+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:16.719945+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:17.720113+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:18.720273+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:19.720421+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:20.720646+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:21.720806+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:22.720995+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:23.721166+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:24.721351+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:25.721490+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:26.721694+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:27.721889+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:28.722044+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:29.722201+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:30.722362+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:31.722538+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:32.722709+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:33.722865+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:34.723041+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:35.723208+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:36.723376+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:37.723501+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:38.723631+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:39.723788+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:40.723916+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:41.724077+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:42.724232+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:43.724376+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:44.724531+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:45.724705+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:46.724977+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:47.725084+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:48.725213+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:49.725420+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:50.725577+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2859 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1587 writes, 5265 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 5.55 MB, 0.01 MB/s
                                           Interval WAL: 1587 writes, 663 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:51.725783+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:52.725958+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:53.726110+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:54.726355+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:55.726535+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:56.726773+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 26542080 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:57.726923+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 26542080 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:58.727096+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 26542080 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:50:59.727240+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 26542080 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:00.727432+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:01.727587+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:02.727871+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:03.728144+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:04.728354+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:05.728510+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:06.728714+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:07.728915+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:08.729068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:09.729689+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:10.730119+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:11.730518+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:12.730817+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:13.731363+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 26533888 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:14.731537+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:15.731972+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:16.732195+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:17.732449+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:18.732840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:19.733104+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:20.733331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:21.733501+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 ms_handle_reset con 0x55ba9551b800 session 0x55ba975012c0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: handle_auth_request added challenge on 0x55ba972a1400
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:22.733708+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:23.733876+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:24.734108+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:25.734336+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 26525696 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:26.734532+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:27.734801+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:28.734961+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:29.735168+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:30.735488+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:31.735673+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:32.735890+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:33.736019+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:34.736177+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:35.736348+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:36.736521+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:37.736886+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:38.737075+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:39.737308+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:40.737559+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 26517504 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:41.737810+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:42.738000+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:43.738228+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:44.738473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:45.738645+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:46.738917+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:47.739222+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:48.739400+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:49.739553+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:50.739774+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:51.739962+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:52.740192+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:53.740382+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:54.740618+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:55.740852+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:56.741143+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 26509312 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:57.741360+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:58.741564+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:51:59.741857+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:00.742056+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:01.742226+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:02.742434+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:03.742606+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:04.742830+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:05.742998+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:06.743280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 26501120 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:07.744073+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 26492928 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:08.744574+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 26492928 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:09.744818+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 26492928 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:10.744977+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 26492928 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:11.745479+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 26492928 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:12.745663+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26484736 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149589 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:13.745834+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26484736 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:14.745992+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 455.085784912s of 455.170593262s, submitted: 26
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26484736 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:15.746454+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26484736 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:16.746810+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109551616 unmapped: 26419200 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:17.746963+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 26206208 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:18.747562+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:19.748010+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:20.748233+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:21.748514+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:22.748804+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:23.748993+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:24.749286+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:25.749463+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:26.750105+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:27.750487+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:28.750774+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:29.751145+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:30.751459+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:31.751804+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:32.751980+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:33.752193+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:34.752392+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:35.752662+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:36.752905+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 26165248 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:37.753066+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:38.753238+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:39.753442+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:40.753645+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:41.753840+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:42.754061+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:43.754244+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:44.754464+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109813760 unmapped: 26157056 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:45.754707+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:46.754968+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:47.755180+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:48.755370+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:49.755631+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:50.755979+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:51.756196+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:52.756555+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:53.756842+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:54.757028+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109821952 unmapped: 26148864 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:55.757360+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:56.757629+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:57.757878+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:58.758042+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:52:59.758239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:00.758473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:01.758697+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:02.758866+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:03.759032+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:04.759262+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:05.759507+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:06.759784+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:07.760003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:08.760189+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:09.760387+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:10.760599+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:11.760855+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:12.761226+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:13.761407+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:14.761795+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:15.761993+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:16.762180+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:17.762785+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:18.762944+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:19.763181+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:20.763324+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:21.763536+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:22.763835+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:23.764047+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:24.764617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:25.764930+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:26.765263+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:27.765481+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:28.765734+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:29.766012+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:30.766323+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:31.766543+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 26140672 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:32.766830+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:33.767033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:34.767331+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:35.767539+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:36.767808+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:37.768035+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:38.768224+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:39.768461+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:40.768636+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:41.768826+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:42.769066+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109846528 unmapped: 26124288 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:43.769329+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:44.769491+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:45.769722+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:46.769964+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:47.770129+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:48.770299+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:49.770464+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:50.770602+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:51.770812+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:52.771036+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 26116096 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:53.771268+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:54.771498+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:55.771672+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:56.771918+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:57.772086+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:58.772346+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:53:59.772506+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:00.772687+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:01.772891+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:02.773122+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:03.773280+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 26107904 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:04.773456+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:05.773632+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:06.773914+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:07.774068+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:08.774269+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:09.774529+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:10.774676+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:11.774914+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:12.775173+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:13.775362+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:14.775568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:15.775847+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:16.776201+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:17.776438+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 26099712 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:18.776617+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:19.776829+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:20.777003+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:21.777158+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:22.777263+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:23.777433+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:24.777568+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:25.778046+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:26.778325+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:27.778711+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 26091520 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:28.779107+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:29.779449+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:30.779689+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:31.779859+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:32.780105+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:33.780269+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:34.780500+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:35.780846+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 26083328 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:36.781165+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:37.781398+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:38.781674+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:39.782094+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:40.782392+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:41.782676+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:42.783018+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:43.783188+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:44.783445+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:45.783717+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:46.784041+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:47.784199+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:48.784461+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:49.784625+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 26075136 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:50.784854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:51.785080+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:52.785346+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:53.785579+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:54.785796+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:55.786027+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:56.786308+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:57.786571+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:58.786816+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:54:59.787040+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 26066944 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:00.787265+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:01.787479+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:02.787709+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:03.788000+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:04.788329+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:05.788591+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:06.788860+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:07.789123+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:08.789295+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 26058752 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:09.789461+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:10.789698+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:11.789862+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:12.790172+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:13.790382+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:14.790556+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:15.790800+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:16.791049+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:17.791278+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:18.791422+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:19.791579+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:20.791856+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:21.792126+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:22.792392+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:23.792563+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:24.792854+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 26050560 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:25.793049+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:26.816142+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:27.816292+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:28.816492+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:29.816656+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:30.816839+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:31.817052+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:32.817239+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:33.817373+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:34.817523+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:35.817660+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:36.817823+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:37.818031+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:38.818179+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:39.818304+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 26042368 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:40.818515+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:41.818649+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:42.818843+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:43.819033+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:44.819449+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:45.819663+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:46.819836+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:47.820075+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:48.820217+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 26034176 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:49.820611+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:50.820968+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets getting new tickets!
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:51.821216+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _finish_auth 0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:51.822139+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:52.821435+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:53.821945+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:54.822106+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:55.822334+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:56.822513+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 26025984 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:57.822675+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:58.822834+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:55:59.823022+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:00.823176+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:01.823323+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:02.823513+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:03.823667+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:04.823835+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:05.824654+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:06.824857+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:07.825041+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:08.825193+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:09.825373+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:10.825532+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:11.825961+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 26017792 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:12.826134+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:13.826351+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:14.826594+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:15.826852+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:16.827052+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:17.827273+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:18.827492+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:19.827644+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 26009600 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:20.827807+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:21.827960+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:22.828136+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:23.828302+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:24.828477+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:25.828648+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:26.828831+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:27.829011+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:28.829146+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:29.829279+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:30.829397+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26001408 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:31.829473+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:32.829652+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:33.829852+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:34.830013+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:35.830201+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:36.830356+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:37.830545+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:38.830675+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 25993216 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:39.830845+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:40.831032+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:41.831265+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:42.831559+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:43.831870+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:44.832101+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:45.832289+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:46.832520+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:47.832700+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:48.832847+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:49.832986+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:50.833168+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:51.833352+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 25985024 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:52.833488+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:53.833727+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:54.833926+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:55.834080+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:56.834255+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:57.834409+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:58.834564+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:56:59.834801+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:00.835049+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 25976832 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:01.835318+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:02.835506+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:03.835691+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:04.835876+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:05.836034+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:06.836191+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:07.836353+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 25968640 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:08.836475+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:09.836610+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:10.836789+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:11.836969+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:12.837093+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:13.837281+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:14.837406+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:15.837550+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:16.837696+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:17.837844+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0x974de4/0xa2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:18.837976+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 25960448 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 11:57:51 compute-0 ceph-osd[83123]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 11:57:51 compute-0 ceph-osd[83123]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149297 data_alloc: 218103808 data_used: 196608
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:19.838102+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 25829376 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: tick
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_tickets
Feb 02 11:57:51 compute-0 ceph-osd[83123]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T11:57:20.838234+0000)
Feb 02 11:57:51 compute-0 ceph-osd[83123]: prioritycache tune_memory target: 4294967296 mapped: 110526464 unmapped: 25444352 heap: 135970816 old mem: 2845415832 new mem: 2845415832
Feb 02 11:57:51 compute-0 ceph-osd[83123]: do_command 'log dump' '{prefix=log dump}'
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18456 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 11:57:51 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389211160' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28325 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27886 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:51 compute-0 nova_compute[251290]: 2026-02-02 11:57:51.914 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:52 compute-0 nova_compute[251290]: 2026-02-02 11:57:52.039 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.105546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472105614, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1464, "num_deletes": 255, "total_data_size": 2585925, "memory_usage": 2631264, "flush_reason": "Manual Compaction"}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472117199, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2533627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36860, "largest_seqno": 38323, "table_properties": {"data_size": 2526619, "index_size": 3951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16071, "raw_average_key_size": 20, "raw_value_size": 2512058, "raw_average_value_size": 3220, "num_data_blocks": 171, "num_entries": 780, "num_filter_entries": 780, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033345, "oldest_key_time": 1770033345, "file_creation_time": 1770033472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 11683 microseconds, and 3661 cpu microseconds.
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.117238) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2533627 bytes OK
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.117257) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.126568) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.126614) EVENT_LOG_v1 {"time_micros": 1770033472126606, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.126644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2579238, prev total WAL file size 2579238, number of live WAL files 2.
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.127290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2474KB)], [80(12MB)]
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472127320, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15195766, "oldest_snapshot_seqno": -1}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 11:57:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739052333' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18474 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6986 keys, 15043247 bytes, temperature: kUnknown
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472261803, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15043247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14997533, "index_size": 27170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183766, "raw_average_key_size": 26, "raw_value_size": 14872520, "raw_average_value_size": 2128, "num_data_blocks": 1071, "num_entries": 6986, "num_filter_entries": 6986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770030598, "oldest_key_time": 0, "file_creation_time": 1770033472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d3f2a2d-cae1-4d7e-a420-44d61e6b143d", "db_session_id": "2U6BFZW95GLJ0BZKEBVK", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.262179) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15043247 bytes
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.264315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.9 rd, 111.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.1 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 7512, records dropped: 526 output_compression: NoCompression
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.264350) EVENT_LOG_v1 {"time_micros": 1770033472264336, "job": 46, "event": "compaction_finished", "compaction_time_micros": 134598, "compaction_time_cpu_micros": 24863, "output_level": 6, "num_output_files": 1, "total_output_size": 15043247, "num_input_records": 7512, "num_output_records": 6986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472264907, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033472266167, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.127222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.266317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.266327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.266330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.266332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: rocksdb: (Original Log Time 2026/02/02-11:57:52.266334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1500822177' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.18426 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/123205715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.28304 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1204657064' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3674137321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.18456 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2389211160' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.28325 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/136806722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.27886 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3684037774' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/739052333' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27916 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18489 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 11:57:52 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852779353' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:52 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:52 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:52 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27943 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:52 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:53 compute-0 nova_compute[251290]: 2026-02-02 11:57:53.019 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:57:53 compute-0 crontab[293480]: (root) LIST (root)
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18510 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28391 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:53 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:53 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:53.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27961 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 02 11:57:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593335816' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.18474 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.27916 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3188563917' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/193320231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2071210317' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.18489 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.28370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2852779353' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.27943 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2116869329' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3360868604' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.18510 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2593335816' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3779043981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28418 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18534 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.27988 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:53.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:53 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:53.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:53 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 11:57:53 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1728863171' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28433 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:53 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28003 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18585 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28451 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28030 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.28391 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.27961 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1457752885' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.28418 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.18534 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.27988 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1728863171' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3575013387' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.28433 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.18555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.28003 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2531024912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/299750893' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1321110862' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb 02 11:57:54 compute-0 nova_compute[251290]: 2026-02-02 11:57:54.462 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 02 11:57:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436793199' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28463 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18609 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:54 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:54 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:54 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:54 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:54 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 02 11:57:54 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471670733' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28484 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:55 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:55 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:55.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 02 11:57:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994972666' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 02 11:57:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392005551' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 02 11:57:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4146272998' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28505 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.18585 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.28451 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.28030 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2436793199' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/4014752184' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.28463 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.18609 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1441649314' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3471670733' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2059724171' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.28484 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3994972666' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2725177838' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3392005551' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3859575555' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4146272998' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 02 11:57:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531454693' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:57:55 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 02 11:57:55 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677277267' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:57:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:57:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:55 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797599339' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:57:56 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:57:56 compute-0 systemd[1]: Starting Hostname Service...
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2977091195' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:57:56 compute-0 systemd[1]: Started Hostname Service.
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905135533' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189434354' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.28505 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2531454693' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/677277267' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3732519777' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2843070017' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3784396885' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1932622507' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3797599339' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2977091195' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4140539204' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2502428268' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/609625264' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/616835228' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2905135533' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3189434354' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:56 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000028s ======
Feb 02 11:57:56 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60632523' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:57:56 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:56 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 02 11:57:56 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932642421' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:57:56 compute-0 nova_compute[251290]: 2026-02-02 11:57:56.915 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:56 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-mgr-compute-0-dhyzzj[74965]: ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:56] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:57:56 compute-0 ceph-mgr[74969]: [prometheus INFO cherrypy.access.139899494223152] ::ffff:192.168.122.100 - - [02/Feb/2026:11:57:56] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb 02 11:57:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:57:57 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:57 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:57 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 02 11:57:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365489162' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:57.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:57 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:57.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:57:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 02 11:57:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2998286102' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28174 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2964921363' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1982980017' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/793057739' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/60632523' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2932642421' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1987892967' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4192882461' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2841064469' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/222682398' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/365489162' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/2998286102' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2470645350' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4207625072' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18786 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 02 11:57:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1562303958' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 02 11:57:57 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47478669' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28204 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:57 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28210 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18819 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18825 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28234 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.28174 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2649214418' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.18786 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1562303958' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/47478669' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1910823585' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.28204 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.28210 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1269933563' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1951335242' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/3552833408' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18855 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:58 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:57:58 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:57:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:58 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28673 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18882 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:59.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:59 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:57:59.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28682 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:57:59 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:57:59 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:57:59.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:57:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 02 11:57:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979803058' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28261 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28691 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18894 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 nova_compute[251290]: 2026-02-02 11:57:59.464 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28697 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Feb 02 11:57:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/645224440' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.18819 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.18825 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.28234 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.18855 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/500775605' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.28249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/2664604464' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.28673 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.18882 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3979803058' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3536006499' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/645224440' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28285 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb 02 11:57:59 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18915 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28718 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 11:57:59 compute-0 ceph-mgr[74969]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 11:58:00 compute-0 nova_compute[251290]: 2026-02-02 11:58:00.018 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:58:00 compute-0 nova_compute[251290]: 2026-02-02 11:58:00.019 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 11:58:00 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 02 11:58:00 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3061313002' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:58:00 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28300 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:00 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18933 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:00 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:58:00 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:58:00 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:58:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:58:01 compute-0 nova_compute[251290]: 2026-02-02 11:58:01.020 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:58:01 compute-0 nova_compute[251290]: 2026-02-02 11:58:01.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 11:58:01 compute-0 nova_compute[251290]: 2026-02-02 11:58:01.020 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 11:58:01 compute-0 nova_compute[251290]: 2026-02-02 11:58:01.051 251294 DEBUG nova.compute.manager [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 11:58:01 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:58:01 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:58:01 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:58:01.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:58:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:58:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb 02 11:58:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:58:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb 02 11:58:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:58:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb 02 11:58:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 967 B/s rd, 0 op/s
Feb 02 11:58:01 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-nfs-cephfs-2-0-compute-0-lrvhze[260429]: 02/02/2026 11:58:01 : epoch 69808c4d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb 02 11:58:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28736 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319945197' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28682 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28261 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28691 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.18894 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28697 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28285 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='mgr.14730 192.168.122.100:0/1107015052' entity='mgr.compute-0.dhyzzj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.18915 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3580290386' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28718 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3061313002' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.28300 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1154189102' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.18960 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:01 compute-0 nova_compute[251290]: 2026-02-02 11:58:01.917 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:01 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:01 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28778 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 11:58:02 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28814 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.18933 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/2392375415' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/280657184' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1287010409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 967 B/s rd, 0 op/s
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.28736 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1319945197' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1970418544' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.18960 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='client.28778 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:02 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:58:02 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:58:02 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:58:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb 02 11:58:02 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:02 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 02 11:58:02 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398281658' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28459 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:03 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:58:03 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.001000029s ======
Feb 02 11:58:03 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.102 - anonymous [02/Feb/2026:11:58:03.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Feb 02 11:58:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.19107 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mgr[74969]: log_channel(audit) log [DBG] : from='client.28910 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mgr[74969]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 967 B/s rd, 0 op/s
Feb 02 11:58:03 compute-0 ceph-1d33f80b-d6ca-501c-bac7-184379b89279-alertmanager-compute-0[104833]: ts=2026-02-02T11:58:03.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.28814 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/760934949' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.28877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/3398281658' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.28459 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/4272065626' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/3690724531' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:58:03 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 02 11:58:03 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013438172' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:58:04 compute-0 nova_compute[251290]: 2026-02-02 11:58:04.046 251294 DEBUG oslo_service.periodic_task [None req-1edfc164-0a7b-4174-a5b4-8db400e04ba6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 11:58:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Feb 02 11:58:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847945580' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:58:04 compute-0 nova_compute[251290]: 2026-02-02 11:58:04.467 251294 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.19107 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.28910 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 967 B/s rd, 0 op/s
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/4013438172' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/1472567787' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/522899503' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.100:0/1847945580' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.101:0/381592267' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: from='client.? 192.168.122.102:0/1343039218' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:58:04 compute-0 ceph-mon[74676]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 02 11:58:04 compute-0 ceph-mon[74676]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832106262' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb 02 11:58:04 compute-0 radosgw[89826]: ====== starting new request req=0x7fe00b2405d0 =====
Feb 02 11:58:04 compute-0 radosgw[89826]: ====== req done req=0x7fe00b2405d0 op status=0 http_status=200 latency=0.000000000s ======
Feb 02 11:58:04 compute-0 radosgw[89826]: beast: 0x7fe00b2405d0: 192.168.122.100 - anonymous [02/Feb/2026:11:58:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s